Five days into this journey, and I’m beginning to see how much there is to learn and adapt to. While I haven’t delved deeply into technical work yet, the process of setting up and refining my tools has been eye-opening. Today, I focused on improving my threat intelligence gathering system with the help of AI language models and automation.

Streamlining Threat Intelligence
I’ve set up a few popular AI language models to assist with my threat intelligence process. Testing these tools through automation platforms feels like brainstorming with colleagues, except these colleagues never tire and their responses evolve with my input. For instance, I’ve used simple automation to consolidate and summarize my news feeds, which lets me process information faster and focus on key insights.
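To make that concrete, here is a minimal sketch of what a feed-summarization step can look like. It assumes the feedparser library and the OpenAI Python client; the feed URL, model name, and prompt wording are placeholders rather than my exact setup.

```python
# Minimal sketch: pull recent items from a threat-intel RSS feed and have an
# LLM condense them into a short digest. Feed URL, model, and prompt are
# illustrative placeholders, not my actual configuration.
import feedparser          # pip install feedparser
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from env

FEEDS = ["https://example.com/threat-intel/rss"]  # placeholder feed URL
client = OpenAI()

def collect_headlines(feeds, per_feed=10):
    """Gather recent titles and snippets from each feed into one text block."""
    items = []
    for url in feeds:
        for entry in feedparser.parse(url).entries[:per_feed]:
            items.append(f"- {entry.get('title', '')}: {entry.get('summary', '')[:200]}")
    return "\n".join(items)

def summarize(headlines):
    """Ask the model for a five-bullet digest of the collected items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {"role": "system",
             "content": ("You are a threat intelligence analyst. Summarize the "
                         "items below into 5 bullets, noting new campaigns, "
                         "affected sectors, and recommended actions.")},
            {"role": "user", "content": headlines},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize(collect_headlines(FEEDS)))
```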
I’ve also started working with a cloud-based SOAR platform, integrating AI tools to find the right workflow for my threat intel needs. These tools are more than assistants; they’re shaping how I approach cybersecurity and bringing a new level of versatility and adaptability to my work.
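On the SOAR side, the integration mostly comes down to dropping an LLM call into an existing playbook step. Here is a hypothetical sketch of an alert-triage action; the webhook URL and payload fields are placeholders I made up, not any vendor's actual API.

```python
# Hypothetical SOAR playbook step: ask an LLM for a short triage note on an
# alert, then attach it to the case via a webhook. The webhook URL and payload
# fields are placeholders, not a specific vendor's API.
import requests
from openai import OpenAI

client = OpenAI()
CASE_WEBHOOK = "https://soar.example.com/api/cases/update"  # placeholder

def triage_note(alert: dict) -> str:
    """Summarize one alert and suggest a next investigative step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Summarize this security alert in three sentences and "
                         "suggest one next investigative step.")},
            {"role": "user", "content": str(alert)},
        ],
    )
    return response.choices[0].message.content

def enrich_case(case_id: str, alert: dict) -> None:
    """Post the triage note back to the case-management webhook."""
    requests.post(CASE_WEBHOOK,
                  json={"case_id": case_id, "note": triage_note(alert)},
                  timeout=30)
```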

Speaking the Language of AI
One challenge I’ve faced is learning how to communicate effectively with these tools. Complex thoughts often require me to structure my questions as logical objects, breaking them down into clear, digestible components. Interestingly, different tools seem to have distinct “personalities.” Some excel at technical depth but demand precise inputs, while others are more conversational but lack advanced capabilities. Adapting my communication style to match the strengths of each tool has been a fascinating exercise in flexibility and critical thinking.
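In practice, "structuring a question as a logical object" just means separating the role, context, task, constraints, and output format instead of writing one long run-on question. A small illustrative sketch of that habit (the field names and example wording are mine, not a standard):

```python
# Sketch of breaking a complex question into labeled parts before sending it
# to a model. Field names and example wording are purely illustrative.
def build_prompt(role, context, task, constraints, output_format):
    """Assemble a structured prompt from clearly separated components."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

print(build_prompt(
    role="Threat intelligence analyst",
    context="Small security team triaging user-reported phishing emails",
    task="Explain how to spot QR-code (quishing) lures in those reports",
    constraints="Under 200 words, written for a non-technical audience",
    output_format="Numbered checklist",
))
```

The more conversational tools tolerate looser phrasing, but the ones with real technical depth reward this kind of explicit structure.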

Social Engineering and the Human Element
Beyond the tools, I’ve been reflecting on the growing threat of social engineering. Scams involving QR codes, phishing, and smishing are evolving rapidly, and AI is amplifying these risks. Articles like “USPS Parcel Cannot Be Cleared Scam” and “Cybercriminals Leveraging AI for Scams” emphasize the increasing difficulty in distinguishing genuine interactions from malicious ones. Deepfakes and AI-assisted reconnaissance are creating a reality where deception is easier and more convincing than ever.

The Broader Perspective
As I continue this journey, I find myself growing more scattered, but in a productive way. My perspective is broadening even as I zero in on the nuances within cybersecurity. Each layer I uncover feels like moving from being a spectator of a sport to playing a specific position with its own vantage point, and that shift is adding real depth to my understanding.
This duality of broadening and focusing resonates with the article “AustralianSuper Turns on Security Copilot,” which highlights how AI can streamline workflows while uncovering new layers of complexity. It’s a reminder that growth often feels chaotic but is ultimately productive.

Empowering Others Through Reflection
I’ve learned more in these five days than I have in the past two years. It’s not just about setting up tools but about transforming how I think and work. By sharing this journey, I hope to empower others to embrace AI as a collaborative partner and to approach cybersecurity with curiosity and resilience.
A Call to Action: How do you think security professionals should approach AI to balance innovation and caution? I’d love to hear your thoughts as we navigate this evolving landscape together.