Currently, our company successfully collects most of the Microsoft 365 logs, but we are facing challenges with gathering the security logs. We aim to comprehensively collect all security logs for Microsoft 365, encompassing elements such as Intune, Defender, and more. Could you please provide advice on how to effectively obtain all the security logs for Microsoft 365?
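A hedged starting point, before chasing specific gaps: inventory which Microsoft 365-related sourcetypes are already arriving. The sourcetype patterns below are assumptions; adjust them to the add-ons you actually run (the Splunk Add-on for Microsoft Office 365 covers the Management Activity audit logs, while Defender and Intune data is commonly exported to an Azure Event Hub and pulled in with the Splunk Add-on for Microsoft Cloud Services).

| tstats count where index=* AND (sourcetype=o365* OR sourcetype=azure* OR sourcetype=mscs*) by index sourcetype

Whatever is missing from that list usually needs a diagnostic setting on the Microsoft side (Event Hub export) before there is anything for Splunk to collect.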
Hello, I have the following example json data:

spec: {
  field1: X,
  field2: Y,
  field3: Z,
  containers: [
    { name: A, privileged: true },
    { name: B },
    { name: C, privileged: true }
  ]
}

I'm trying to write a query that only returns privileged containers. I've been trying to use mvfilter but that won't return the name of the container. Here's what I was trying to do:

index=MyIndex spec.containers{}.privileged=true
| eval priv_containers=mvfilter(match('spec.containers{}.privileged',"true"))
| stats values(priv_containers) count by field1, field2, field3

This will, however, just return "true" in the priv_containers values column, instead of the container's name. What would be the best way to accomplish that?
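A minimal sketch of one common approach: pair each container name with its privileged flag via mvzip, expand to one row per container, then filter. This assumes every element of spec.containers{} carries a privileged key; in the example above container B does not, so the two multivalue fields would misalign and you would need to re-extract the per-container objects from _raw with spath instead.

index=MyIndex spec.containers{}.privileged=true
``` pair up names and flags, one "name|flag" string per container ```
| eval pairs=mvzip('spec.containers{}.name', 'spec.containers{}.privileged', "|")
| mvexpand pairs
| eval c_name=mvindex(split(pairs, "|"), 0), c_priv=mvindex(split(pairs, "|"), 1)
| where c_priv="true"
| stats values(c_name) as priv_containers count by field1, field2, field3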
I have an unstable data feed that sometimes only reports on a fraction of all assets. I do not want such periods to show any number. The best way I can figure to exclude those time periods is to see if there is a sudden drop of some sort of total. So, I set up a condition after timechart like this:

| addtotals
| delta "Total" as delta
| foreach * [eval <<FIELD>> = if(-delta > Total OR Total < 5000, null(), '<<FIELD>>')]

The algorithm works well for Total, and for some series in timechart, but not for all, not all the time. Here are two emulations using index=_internal on my laptop. One groups by source, the other groups by sourcetype.

index=_internal earliest=-7d
| timechart span=2h count by source ``` data emulation 1 ```

With group by source, all series seem to blank out as expected. Now, I can run the same tally by sourcetype, like this:

index=_internal earliest=-7d
| timechart span=2h count by sourcetype ``` data emulation 2 ```

This time, all gaps have at least one series that is not null; some series go to zero instead of null, some even obviously above zero. What is the determining factor here? If you have suggestions about alternative approaches, I would also appreciate them.
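One possible explanation to test (an assumption, not a confirmed diagnosis): foreach * iterates over every column, including Total and delta themselves, and the column order differs between the by-source and by-sourcetype result sets. Once the loop nulls out Total or delta for a row, the condition compares against null for every later column and quietly stops blanking them. Computing the verdict once, before the loop touches anything, removes the ordering dependence; a sketch:

index=_internal earliest=-7d
| timechart span=2h count by sourcetype
| addtotals
| delta Total as delta
``` decide once per row, before any column gets modified ```
| eval bad=if(-delta > Total OR Total < 5000, 1, 0)
``` keep the flag itself intact so later columns still see it ```
| foreach * [eval <<FIELD>> = if(bad==1 AND "<<FIELD>>"!="bad", null(), '<<FIELD>>')]
| fields - bad delta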
Hello guys,

We have some orphaned saved searches in our Splunk Cloud instance that are viewable via the following REST search:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0

However, when looking at the searches pulled in Searches > Reports and Alerting, they do not show up. There are also zero saved searches viewable under Settings > All Configurations > Reassign Knowledge Objects > Orphaned (with all filters on all).

We are trying to reassign these searches via REST with the following example syntax:

curl -sk -H 'Authorization: Bearer <token>' -d 'owner=<name of valid owner>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

but are receiving the following error. This is not an issue with the id, as the following is able to pull saved search info:

curl -sk -H 'Authorization: Bearer <token>' https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29

Does anyone have a better syntax to use to post this owner change to the saved searches?
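A hedged alternative worth trying: ownership changes on knowledge objects generally go through the object's acl endpoint rather than the object itself, and the acl endpoint expects the sharing level to be posted alongside the owner. A sketch reusing the same encoded search name (sharing=app is an assumption; use the sharing level the search currently has):

curl -sk -H 'Authorization: Bearer <token>' \
  -d 'owner=<name of valid owner>' -d 'sharing=app' \
  https://<splunk cloud.com>:8089/servicesNS/nobody/search/saved/searches/%28%20Customers-LoyaltyEnrollment_1.0%20%29/acl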
I currently have events that include load times and events that include header colour for my app. These events both have the user's session id. How do I join the two events based on session id so I can see the load time based on header colour?

Query for load time:

index="main" measurement_type=loadTime screen_class_name=HomeFragment

Query for header colour:

index=api_analytics sourcetype=api_analytics
| `expand_api_analytics`
| search iri=home/header
| spath input=analytic path=session_id output=session_id
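A minimal sketch using join on session_id. The field names load_time and header_colour are assumptions (the original only shows how session_id is extracted); substitute whatever your events actually carry:

index="main" measurement_type=loadTime screen_class_name=HomeFragment
| fields session_id load_time
| join type=inner session_id
    [ search index=api_analytics sourcetype=api_analytics
      | `expand_api_analytics`
      | search iri=home/header
      | spath input=analytic path=session_id output=session_id
      | spath input=analytic path=header_colour output=header_colour
      | fields session_id header_colour ]
| stats avg(load_time) as avg_load_time by header_colour

If the subsearch can exceed join's row limits, the usual alternative is to OR the two searches together and roll the fields up with stats values(*) by session_id.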
Hi Splunk community,

I have JSON logs and I want to remove the prefix from the events, capturing from {"successfulSetoflog through the "AZURE API Health Event"} portion.

Sample event:

2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}}]}

The expected output would be:

{"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................  1}, "detail-case": "AZURE API Health Event"}
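One hedged way to do this at index time is a SEDCMD pair in props.conf that strips the leading timestamp/GUID plus the {"records": [{"value": wrapper and the matching trailing braces. The regexes below are assumptions based on the single sample above; verify them against real events before deploying:

[your_sourcetype]
# drop everything before the target object (prefix contains no "{")
SEDCMD-strip_prefix = s/^[^{]+\{"records": \[\{"value": //
# drop the wrapper's closing braces at the end of the event
SEDCMD-strip_suffix = s/\}\]\}$//

This must live on the indexer or heavy forwarder that first parses the data, and it permanently alters what gets indexed; for a non-destructive test, | rex mode=sed field=_raw "s/..." at search time applies the same substitution without touching stored events.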
I have a JSON file that is formatted like this:

{
  "meta": {
    "serverTime": 1692112678688.699,
    "agentsReady": true,
    "status": "success",
    "token": "ABCDEFG",
    "user": {
      "userName": "username",
      "role": "ADMIN"
    }
  },
  "vulnerabilities": [
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    },
    {
      "id": "pcysys_linux_0.10000000",
      "creation_time": 1690581702599.0,
      "name": "name",
      "summary": "summary",
      "found_on": "Host: 10.10.10.10",
      "target": "Host",
      "target_id": "abcdefg",
      "port": 445,
      "protocol": "abc",
      "severity": 3.5,
      "priority": null,
      "insight": "this is the insight",
      "remediation": "this is the remediation"
    }
  ]
}

I am trying to ingest just the vulnerabilities. It works when I try it in the Splunk UI, but when I save it in my props.conf file it doesn't split correctly and the id from one section gets appended to the end of the previous one. Here is what I am trying:

[sourcetype]
LINE_BREAKER = }(,[\r\n]+)
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
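Two hedged things to check. First, anchoring the breaker on the start of the next vulnerability object makes the split less ambiguous than a bare closing brace; only the capture group is discarded, so the } stays with the previous event and {"id" starts the next one:

[sourcetype]
# break between objects: } ends the previous event, the comma/whitespace
# in the capture group is discarded, and {"id" begins the next event
LINE_BREAKER = \}(,\s+)\{\s*"id"
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1

Second, "works in the UI preview but not on ingest" is often a deployment issue rather than a regex issue: LINE_BREAKER is applied on the first full Splunk instance that parses the data (indexer or heavy forwarder), so props.conf saved only on a universal forwarder or search head would be ignored at parse time.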
At Splunk University, the precursor event to our Splunk users conference called .conf23, I had the privilege of meeting Tan Jia Le, the winner of the prestigious "12th Singapore Cyber Conquest" contest. Jia Le, a student with a passion for cybersecurity, graciously shared his story about the Cyber Conquest and his experience with Splunk, the powerful security platform that played a pivotal role in the competition.

The Cyber Conquest was a thrilling contest that brought together students from various Institutes of Higher Learning in Singapore, along with teams from other ASEAN countries. The participants were tasked with using Splunk's Boss of the SOC (BOTS) suite of security tools to answer challenging questions. The faster and more accurately they responded, the more points they earned.

I was curious to learn how Jia Le's team emerged as the winners. He explained that their success was attributed to their strategic approach. They swiftly tackled questions they knew how to answer, and if they faced challenges, they didn't dwell on them for too long but moved on to other tasks. This efficient strategy allowed them to surpass all other teams by a comfortable margin.

It was evident that the contestants needed to be well-versed in using Splunk to excel in the competition. Jia Le and his teammate were aware of this from the beginning, and they prepared accordingly. Prior to the contest, they attended one or two short Splunk trainings, and they were granted free access to relevant Splunk courses. They also familiarized themselves with BOTS by exploring previous versions of the competition on TryHackMe and studying online write-ups from past participants.

As the winning team, Jia Le and his teammate were awarded an all-expenses-paid trip to Splunk University and .conf23 in Las Vegas. At Splunk University, Jia Le attended the Architect and the SOAR Administrator Bootcamps. He found the Architect Bootcamp enlightening, although some aspects were beyond his current role as an end user. On the other hand, he deeply enjoyed the SOAR Administrator course, as he had a keen interest in automating tasks and saw firsthand how the solution could benefit a Security Operations Team.

Through the Splunk University courses, Jia Le gained a deeper appreciation for Splunk's suite of tools, and he believes this knowledge will be beneficial in his future roles. He now has a better understanding of the tools and capabilities required in a modern SOC to combat ever-evolving cyber threats, and he believes this knowledge will support future SOC implementations and improvements.

Jia Le began using Splunk in 2018 during an internship. During his journey in cybersecurity, he has observed that the most common use case of Splunk in the industry is as a Security Information and Event Management (SIEM) tool - ingesting logs from various sources and using Splunk to search through them with ease.

"My favorite aspect of Splunk is the combination of SQL-like syntax and the ability to chain complex operations using pipes (|)," said Jia Le. "These features allow me to generate intriguing insights from logs, making my cybersecurity tasks more efficient and insightful."

For Jia Le, however, the best part of the contest was not just winning. "It was a great opportunity to showcase my skills, make connections in the cybersecurity community, and celebrate our shared passion for defending against cyber threats using Splunk software," he said.

From using Splunk as an intern, to winning the "12th Singapore Cyber Conquest," to attending Splunk University, Jia Le believes he is even better equipped with the skills needed to pave the way to a promising career in cybersecurity defense.

We really appreciate Jia Le's willingness to share his story! If you have a similar story, please reach out to me, cskokos@splunk.com.

-- Callie Skokos on behalf of the Splunk Education Crew
Hello, I need to monitor a Python script I've developed. So far it does have a logging object; logging.info, requests, and MySQL sqlhooks logs are shown when pyagent run is started, but I can't see any reference to my app on the server. Any recommendation would make me really grateful.

My case is similar to this one, but I'm on the v23 Python agent:
https://stackoverflow.com/questions/69161757/do-appdynamics-python-agent-supports-barebone-python-scripts-w-o-wsgi-or-gunicor
https://docs.appdynamics.com/appd/23.x/23.9/en/application-monitoring/install-app-server-agents/python-agent/python-supported-environments

I don't necessarily need metrics monitoring, but I do really need to monitor the events happening in the script.

Do you folks have any suggestion as to whether it's possible for the AppDynamics agent to hook only into the asyncio library, performing or simulating the same layer that the Java proxy agent sniffs in the Python VM? Is it possible to send these 'stimuli' straight to another Java program that would make the BT call?
https://rob-blackbourn.medium.com/asgi-event-loop-gotcha-76da9715e36d
https://www.uvicorn.org/#fastapi
https://docs.appdynamics.com/appd/21.x/21.4/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/instrument-jvms-started-by-batch-or-cron-jobs

Finally, I could even dare to look further into using OpenTelemetry for my case, in order to collect the main points. Is OpenTelemetry a standard feature for AppDynamics? Is it an extra paid option?
https://github.com/Appdynamics/opentelemetry-collector-contrib
https://opentelemetry.io/docs/instrumentation/python/automatic/
https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/logging/logging.html
I am trying to merge two datasets which are results of two different searches on a particular field value common to both. The field I want to merge on is not a 'primary key' of any of the datasets, and therefore there's multiple events in each of these datasets with a given value of this field. My expected result is that for each event in the first dataset with a particular value of that field, I will end up producing n events in the resulting dataset, where n is the number of events in the second dataset that have that particular value in the field. So for example, if I have 3 events with that field value in dataset A and 4 events with that particular field value in dataset B, then I expect to have 12 events in the result dataset (after the merge). What Splunk command/s would be useful to merge these datasets in this fashion? 
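A hedged sketch of one way to get this many-to-many (Cartesian-per-key) behaviour: join keeps only the first matching subsearch row by default, but max=0 keeps them all, so 3 rows in A times 4 matching rows in B yields the 12 combined rows described above. The search and field names are placeholders:

<search for dataset A>
| join type=inner max=0 common_field
    [ search <search for dataset B> ]

Bear in mind join's subsearch row limits; if dataset B is large, reshaping one side into a lookup or combining both searches and expanding with mvexpand may be needed instead.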
Good Afternoon,

I have been trying to fix this error for a few weeks now. The app was working fine and just stopped out of nowhere a few months ago. I have attempted full reinstalls of the app and searched all over Google and the Splunk Community pages; I have looked at multiple similar errors from other apps and none of the solutions helped. Permissions are correct as well. Any help would be greatly appreciated!

The full error is "Unable to initialize modular input "redfish" defined in the app "TA-redfish-add-on-for-splunk" : introspecting scheme=redfish: script running failed (PID 4535 exited with code 1)"
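One hedged debugging step: run the modular input script by hand under Splunk's Python and ask it for its scheme, which usually surfaces the real traceback hiding behind "script running failed". The script filename below is an assumption; use whatever actually sits in the app's bin directory:

# run the modular input's introspection step manually (script name is a guess)
$SPLUNK_HOME/bin/splunk cmd python3 $SPLUNK_HOME/etc/apps/TA-redfish-add-on-for-splunk/bin/redfish.py --scheme

A non-zero exit with an ImportError here typically points at a missing or incompatible bundled Python library (for example after a Splunk upgrade changed the Python version) rather than at the input configuration itself.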
Watch Principal Threat Researcher, Michael Haag, provide an overview of:

- 11 new analytic stories developed by the Splunk Threat Research Team in Q2 (May - July) related to adversary tradecraft, ransomware, and emerging threats
- A new machine learning detection
- A new Splunk SOAR hunting playbook

Check out these other great resources to learn more about security content released in Q2:

- Amadey Threat Analysis and Detections
- Don't Get a PaperCut: Analyzing CVE-2023-27350
- I am the Snake Now: Analysis of Snake Malware
- Do Not Cross the 'RedLine' Stealer: Detections and Analysis
- Machine Learning in Security: Detect DNS Data Exfiltration Using Deep Learning
- Threat Hunting with Playbooks
I need to get the list of ad hoc searches and saved searches run by each user from the audit logs. How do I differentiate these searches in the _audit logs? Is there a specific keyword to identify the search type?
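A hedged sketch: in _audit, scheduled (saved) searches carry a search_id beginning with "scheduler", and savedsearch_name is populated for them, so either marker can split the two populations. Field behaviour can vary by version, so treat this as a starting point:

index=_audit action=search info=granted search_id=*
| eval search_type=if(match(search_id, "scheduler"), "saved/scheduled", "ad hoc")
| stats count by user search_type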
Hello Splunkers !

I am looking for a way to monitor and retrieve the users that logged into my Linux machine, but only the users that are part of the root or wheel groups, or who are present in any sudoers file. I was able to get the user who SSHed into my machine using the '/var/log/secure' file, but my challenge is to check whether the user "has a lot of rights or not". I have some ideas in mind to achieve this, but maybe there is something out of the box with Splunk or the TA Nix Add-On...

If somebody already tried to do something similar, any help would be appreciated!

Thanks !

GaetanVP
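A hedged sketch of one home-grown approach: export the privileged membership (e.g. a small cron job rendering /etc/group and sudoers entries to CSV), define it as a lookup, and enrich the SSH logins with it. The index, lookup name, and its fields (user, group) are all assumptions; sourcetype linux_secure is what the Nix TA assigns to /var/log/secure:

index=os sourcetype=linux_secure "Accepted"
``` extract the login from sshd "Accepted password/publickey for <user> from <ip>" lines ```
| rex "Accepted \w+ for (?<user>\S+) from (?<src_ip>\S+)"
| lookup privileged_users user OUTPUT group
| where isnotnull(group)
| stats count latest(_time) as last_login by user group src_ip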
I have a search query that takes a search value from a drop down. For example, the drop down has the values:

All
A
B

The query uses:

| where productType="$dropdown$"

How do I remove the where clause if All is selected? There is no productType "All".
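A common pattern (a sketch, not the only way): give the All choice the value * and match with search instead of where, since search honours wildcards and where does not:

<input type="dropdown" token="dropdown">
  <choice value="*">All</choice>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
</input>

and in the query:

| search productType="$dropdown$"

If the where clause must stay, another option is keeping "All" as the literal token value and short-circuiting it: | where productType="$dropdown$" OR "$dropdown$"="All".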
Hi Team,

I am trying to install the Machine Agent on a Unix machine hosted in AWS. The installation is fine; I can see the AppD Machine Agent service up and running, but it is not getting registered with the controller. When I check the logs, it shows a timed-out error. Is there anything specific I need to do for AWS EC2 instances that have machine agents installed on them?
My Splunk instance is running in GMT and I want to schedule an alert as per China time.

*/5 21-23,0-13 * * 0-5

This is the cron. The logic is to trigger the alert every 5 minutes from Monday to Friday, 5 AM till 10 PM China time, but the alert is also getting triggered on Sunday. How can we customise the cron?
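A worked sketch of why Sunday fires, plus one hedged fix. China (UTC+8) Monday 05:00 is Sunday 21:00 GMT, so the late-GMT hours must run Sunday-Thursday while the early-GMT hours must run Monday-Friday; a single cron expression cannot tie different hour ranges to different day ranges, which is why day-of-week 0-5 also matches Sunday 00:00-13:59 GMT (Sunday daytime in China). Splitting the schedule across two copies of the alert covers the window exactly:

*/5 21-23 * * 0-4    (GMT Sun-Thu 21:00-23:59 = China Mon-Fri 05:00-07:59)
*/5 0-13 * * 1-5     (GMT Mon-Fri 00:00-13:59 = China Mon-Fri 08:00-21:59)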
My splunk instance is running in GMT and I want to schedule an alert as per China time.  */5 21-23,0-13 * * 0-5 This is the cron. The logic is to trigger the alert every 5minutes from Monday to friday 5AM till 10 PM china Time but the alert is getting triggered on Sunday as well. How can we cutomise the cron?  
Hi, I am attempting to determine if Splunk is installed on all of our local systems within our environment. Is there a way to check this through Tags, the Windows Registry (regedit), or ParentProce... See more...
Hi, I am attempting to determine if Splunk is installed on all of our local systems within our environment. Is there a way to check this through Tags, the Windows Registry (regedit), ParentProcessName, or a PowerShell script? If so, could you please provide guidance on the process? Thanks
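A hedged PowerShell sketch for a local check: the universal forwarder installs a Windows service named SplunkForwarder (full Splunk Enterprise uses Splunkd), so probing the services plus the uninstall registry keys covers both. The display-name pattern is an assumption; adjust it to what your MSI actually registers:

# check for the Splunk UF / Enterprise Windows services
Get-Service -Name SplunkForwarder, Splunkd -ErrorAction SilentlyContinue

# check installed-programs registry entries for anything Splunk-branded
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Splunk*' } |
    Select-Object DisplayName, DisplayVersion

From the Splunk side, comparing | metadata type=hosts index=* against an asset inventory lookup shows which systems are not reporting in at all.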
hi guys, I want to detect when more than 10 different ports of the same host are sniffed and scanned every 15 minutes and this is triggered 5 times in a row, then alarm; if the same time period is triggered for three consecutive days, the alarm is also triggered. The current SPL:

index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers>=5

Next, I don't know how to query for triggers in the same period for three consecutive days.
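A hedged sketch for the three-consecutive-days part, assuming "same time period" means the same 15-minute slot of the day: derive the day and the slot-of-day from each triggering window, then count distinct days per slot over a three-day span.

index="xx" earliest=-3d@d latest=@d
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats count as consecutive_triggers by src_ip dest_ip reset_on_change=true
| where consecutive_triggers >= 5
``` same wall-clock slot across different days ```
| eval day=strftime(_time, "%Y-%m-%d"), slot=strftime(_time, "%H:%M")
| stats dc(day) as days_triggered by src_ip dest_ip slot
| where days_triggered >= 3

Note that streamstats with reset_on_change only resets when the by-fields change, not when there is a gap in _time, so strictly consecutive 15-minute windows would need an extra check on the time delta between rows.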
Hello,

When I enable sslVerifyServerCert in server.conf under [sslConfig], I am seeing the following errors. From where does it understand that there is an IP address mismatch? Is it trying to resolve the CN mentioned in the certificate?

09-11-2023 11:40:01.284 +0300 WARN X509Verify [1034989 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"
09-11-2023 11:40:01.285 +0300 WARN X509Verify [1034990 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"
09-11-2023 11:40:01.286 +0300 WARN X509Verify [1034986 TcpChannelThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"
09-11-2023 11:40:03.998 +0300 WARN X509Verify [1034786 DistributedPeerMonitorThread] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"
09-11-2023 11:40:04.005 +0300 WARN X509Verify [1034777 DistHealthReporter] - Server X509 certificate (CN=searche.test.local,OU=NIL,O=TEST,L=Loc,ST=Sta,C=NIL) failed validation; error=64, reason="IP address mismatch"

Cheers.
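A hedged reading of this, plus a way to see what the verifier is comparing: with server certificate verification enabled, Splunk checks the peer certificate's identity (Common Name and Subject Alternative Names) against the name or IP it used to connect, so reaching a peer by IP address fails unless that IP is present as an IP SAN; no DNS resolution of the CN is needed for that check. Inspecting the certificate shows what is actually in there (file name is a placeholder):

# list the SANs (alongside the CN) that the verifier will match against
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"

If the SAN list only contains searche.test.local, either reissue the certificate with the connecting IP addresses as IP SANs, or reference the peers by that hostname instead of by IP in your distributed-search and clustering configuration.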