All Topics


A recent change to our logs has broken my dashboard panels and reporting, and I'm struggling to find the best way to modify my search criteria so it picks up data from both before and after the change. It's a very simple change, as single quotation marks were added around the field value, but it's giving me a big headache.

index=prd sourcetype=core Step=* Time=*
| timechart avg(Time) by Step span=1d

Field in the event log changed:
FROM: Step=CONVERSION_APPLICATION
TO: Step='CONVERSION_APPLICATION' (with single quotation marks)
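One possible fix (a sketch, assuming the quotes only ever appear at the start and end of the value): strip the single quotes with eval before charting, so old and new events land in the same series:

index=prd sourcetype=core Step=* Time=*
| eval Step=trim(Step, "'")
| timechart avg(Time) by Step span=1d

trim() with a second argument removes those characters from both ends of the value, leaving unquoted values untouched.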
Hello: I recently started playing with the Risk framework, RBA, etc. Most of my Risk Analysis dashboard is working within Enterprise Security, except for three (3) sections:

Risk Modifiers By Annotations
Risk Score By Annotations
Risk Modifiers By Threat Object

For the annotations part, we do manually tag MITRE ATT&CK tactics within our content, so I'm not sure why these panels do not show anything. Also, does anyone know what saved searches run in the background to populate these panels? I'd like to double-check to make sure I have these enabled. Thanks!
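One quick diagnostic (a sketch; the index and field names assume a default RBA setup, where risk rules write events to the risk index with an annotations.mitre_attack field, so verify them in your environment): if this returns nothing over the dashboard's time range, the risk rules are not attaching annotations to the risk events, which would explain the empty panels.

index=risk
| stats count by annotations.mitre_attack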
Did you know that Splunk never stops thinking about how we can contribute to developing a robust cybersecurity workforce? This is one reason we continue to grow and scale our offerings within the Splunk Academic Alliance Program to better serve students, faculty, and staff within non-profit universities, colleges, and schools.

Recruiting Event > What's a Splunktern?

If you are currently participating in the Splunk Academic Alliance program through your college or university, then we have a recruiting event you won't want to miss! Splunk's Global Emerging Talent Team wants you to join us for an exclusive Academic Alliance | Splunktern Open House event on October 18 at 1:00 p.m. - 1:45 p.m. (Pacific & Edinburgh) to learn more about becoming a Splunktern.

Now that you've learned to use Splunk in the classroom, we invite you to learn how you can use Splunk to accelerate your career, starting with our award-winning internship opportunities!

Register for our Americas Academic Alliance | Splunktern Open House on October 18 here
Register for our Europe Academic Alliance | Splunktern Open House on October 18 here

Hope to see you at the virtual Open House on October 18.

– Callie Skokos on behalf of the Splunk Education and Global Emerging Talent Team
I'm having trouble getting a duration between two timestamps from some extracted fields. My search looks like this:

MySearchCriteria index=MyIndex source=MySource
| stats list(ExtractedFieldStartTime) as MyStartTime, list(ExtractedFieldEndTime) as MyEndTime by AnotherField
| eval MyStartUnix=strptime(MyStartTime, "%Y-%m-%dT%H:%M:%S")
| eval MyEndUnix=strptime(MyEndTime, "%Y-%m-%dT%H:%M:%S")
| eval diff=MyEndUnix-MyStartUnix
| table MyStartTime MyEndTime MyStartUnix MyEndUnix diff

And my table is returned as:

MyStartTime          MyEndTime            MyStartUnix        MyEndUnix          diff
2023-10-10T14:48:39  2023-10-10T14:15:15  1696963719.000000  1696961715.000000
2023-10-10T14:57:50  2023-10-10T13:56:53  1696964270.000000  1696960613.000000
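One likely cause (a sketch of a possible fix, not tested against this data): list() produces multivalue fields, and eval arithmetic on multivalue fields returns null, which is why diff comes back empty. Parsing the timestamps before the stats and aggregating the numeric values avoids that:

MySearchCriteria index=MyIndex source=MySource
| eval StartUnix=strptime(ExtractedFieldStartTime, "%Y-%m-%dT%H:%M:%S")
| eval EndUnix=strptime(ExtractedFieldEndTime, "%Y-%m-%dT%H:%M:%S")
| stats min(StartUnix) as MyStartUnix, max(EndUnix) as MyEndUnix by AnotherField
| eval diff=MyEndUnix-MyStartUnix

Note also that in the sample rows the end timestamps are earlier than the start timestamps, so even where diff computes it would be negative; the fields feeding each side may be swapped.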
I need help with a regex to extract keys and values from raw data. The regex below works with xml_kv_extraction, and it works in regex101, but not in Splunk with rex. Any suggestions?

<(?<field_header>[^>]+)>(?<field_value>[^<]+)<\/\1>

https://regex101.com/r/IBsMhK/1

For example, for events containing:

<field_title><field_header1>field_value1</field_header1><field_header2>field_value2</field_header2></field_title>

the fields should appear as below:

field_title = <field_header1>field_value1</field_header1><field_header2>field_value2</field_header2>
field_header1=field_value1
field_header2=field_value2

Sample events:

1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:36:22.742479", LAST_UPDATE_DATE="1997-10-10 13:36:22.74", ACTION="externalFactor", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.388887", LAST_UPDATE_DATE="1997-10-10 13:03:58.388", ACTION="externalFactor.RESPONSE", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.384984", LAST_UPDATE_DATE="1997-10-10 13:03:58.384", ACTION="externalFactor.RESPONSE", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.384947", LAST_UPDATE_DATE="1997-10-10 13:03:58.384", ACTION="externalFactor.RESPONSE", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.378965", LAST_UPDATE_DATE="1997-10-10 13:03:58.378", ACTION="externalFactor", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.374242", LAST_UPDATE_DATE="1997-10-10 13:03:58.373", ACTION="externalFactor", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.374235", LAST_UPDATE_DATE="1997-10-10 13:03:58.373", ACTION="externalFactor", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>"
1997-10-10 15:35:13.046, CREATE_DATE="1997-10-10 13:03:58.325953", LAST_UPDATE_DATE="1997-10-10 13:03:58.325", ACTION="externalFactor.RESPONSE", STATUS="info", DATA_STRING="<?xml version="1.0" encoding="UTF-8"?> <externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"

@priit @PriA @yonmost @jameshgibson @bnikhil0584
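A possible alternative (a sketch; the index and sourcetype are placeholders, and it assumes each event sits on one line): rex creates fields named after its capture groups (field_header, field_value), not dynamic field names taken from the captured text, which is likely why the regex101-verified pattern does not produce field_header1=field_value1 in Splunk. Parsing the embedded XML with spath sidesteps the backreference entirely:

index=your_index sourcetype=your_sourcetype
| rex field=_raw "DATA_STRING=\"(?<DATA_STRING>.+)\"$"
| spath input=DATA_STRING

spath generates one field per XML element path, e.g. externalFactorReturn.roleName, which can then be renamed as needed.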
[monitor:///var/log/suricata/eve.json]
disabled=true
sourcetype= suricata
index = suricata

Currently we're not seeing any eve.json data coming from the Suricata box to the Splunk server. We do get other logs, like the syslog, but no eve.json data. Tried throwing the TA out in the apps folder on the server; that didn't work. Added index = suricata to the server and it doesn't find it. Any help would be appreciated. Instructions on deploying the app would be nice.
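One thing stands out (a hedged observation): the stanza above has disabled=true, which turns the monitor input off entirely. A minimal working stanza, reusing the index and sourcetype names from the post, would be:

[monitor:///var/log/suricata/eve.json]
disabled = false
sourcetype = suricata
index = suricata

This inputs.conf needs to live on the forwarder running on the Suricata box (e.g. in an app's local directory), and the suricata index must already exist on the indexer.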
I need to search a field called DNS_Matched, which is multivalue, for events that have one or more values ending with -admin, -vip, or -mgt, or values that match none of those three. How can I do that?

Example DNS_Matched values:

host1
host1-vip
host1-mgt
host2
host2-admin
host2-mgmt
host2-vip
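One possible approach (a sketch; the suffix list is taken from the post): mvfilter() can split the multivalue field into the values that carry a suffix and those that do not:

your_search
| eval suffix_values=mvfilter(match(DNS_Matched, "-(admin|vip|mgt)$"))
| eval other_values=mvfilter(NOT match(DNS_Matched, "-(admin|vip|mgt)$"))
| where isnotnull(suffix_values)

Testing other_values instead of suffix_values in the final where selects events on the opposite side of the criteria.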
Go from in-context troubleshooting to in-depth log analysis in one click

We're making it easier for Observability Cloud users to seamlessly bring their log data into Splunk Cloud or Enterprise Platform. With our new "Open in Splunk platform" button available in our Log Observer UI, engineers wishing to analyze their logs in more detail can easily transpile the logs from Observability Cloud to Splunk Cloud and take advantage of best-in-class logging capabilities. Log Observer Connect users will be able to swiftly jump from in-context troubleshooting to deep-dive log investigations in one unified platform.

What does that look like? Open your Log Observer in Observability Cloud: a new "Open in Splunk platform" button appears! After one click, you are redirected to the Splunk platform, and your logs now appear in the "Search and Reporting" interface.

Main benefits

Seamless access for consistency: With one click, your logs in Observability Cloud get transpiled into Splunk Search and Reporting, and your filters are converted into SPL
Extended log analytics: Slice and dice your observability log data in greater detail for your most complex troubleshooting and monitoring use cases
Best of both worlds: Hannah Montana knows it best. With Splunk, you can combine Splunk platform's best-in-class and powerful search and visualization capabilities with Splunk Observability Cloud's full-fidelity data in one flexible multipurpose platform.

How to get started

If you're a Log Observer Connect user, the new "Open in Splunk platform" button will appear automatically! You can also take a look at our updated docs for more detailed information on how this works. Not a LO Connect customer yet? Reach out to your account representative to learn all about the many advantages of combining Splunk platform with Splunk Observability Cloud!
I am aware of this site: https://docs.splunk.com/Documentation/Splunk/7.2.10/Forwarding/Compatibilitybetweenforwardersandindexers

I have several simple Splunk implementations (all functions run on one server). My indexers are a mixture of 6.5 and 6.6. I plan on upgrading to 7.2.10, with the eventual goal of getting to the latest version.

First, I'd like to understand which forwarders can communicate with which indexers. The link above covers 7.0.0 and later, but I'm at 6.5/6.6, as stated earlier.

Secondly, I know I need to upgrade Splunk through various incremental versions before I get to 9.x. What is the recommended path for upgrading to 9.x? Since I'm on 6.5 or 6.6, I believe my next step is 7.2.10 (is that right?). But what is the path after that? Thanks for the help!
Can someone help me with the Splunk code that would be necessary to search for the Idemia Machines? Thank you Anthony
I have a field called DNS whose values contain the hostname in the lookup. There is also another field called Identified_Host that has similar values. I will show the difference below:

DNS                      Identified_Host
host1.domain.com         host1.domain.com
host1-admin.domain.com   host1.domain.com
host1-mgt.domain.com     host1.domain.com
host2.domain.com         host2.domain.com
host2-admin.com          host2.domain.com
host2-mgt.admin.com      host2.domain.com
host3.domain.com         host3.domain.com
host3-admin.com          host3.domain.com

As this example shows, Identified_Host is the main name of the host. I need to find out which hosts in Identified_Host have values in DNS with the same name but ending with -admin and/or -mgt.
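One possible approach (a sketch; the lookup file name is a placeholder): flag the DNS values that carry one of the suffixes, then group by Identified_Host and keep only the hosts with at least one flagged entry:

| inputlookup your_lookup.csv
| eval has_suffix=if(match(DNS, "-(admin|mgt)[.]"), 1, 0)
| stats sum(has_suffix) as suffix_count, values(DNS) as DNS by Identified_Host
| where suffix_count > 0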
Hi, for some reason we cannot receive data into _internal or any other index (all of them). Old indexes are still available through the database. It looks like a generic problem, not related to any specific index. All I can see is _audit. Maybe it's OK to back up $SPLUNK_HOME/etc and then reinstall the Splunk software? Or, if possible, restart some processes or modify a config file (inputs.conf, outputs.conf)?

Rgds
Geir
Hello all, I have created and applied the following configuration in props.conf:

SEDCMD-XXXXX = s/XXXXXX//g

The field I wanted deleted is deleted from the logs... or it appears that way. Looking at the raw logs, the field/values are not there, but once I expand the event (with the tab in the upper-left corner of the log entry), the field is there...? As if that field/value pair was not deleted but hidden?? The field shows in the left-side "All Fields" column too. Hope someone can guide/explain it; I have not been able to find an answer (if it even has one)... Thanks!
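One thing worth checking (a hedged suggestion): SEDCMD rewrites _raw at index time only, so events indexed before the change keep the field, and search-time field extraction will still surface it on those old events. Restricting the search to data indexed after the change shows whether the rule works now; the index and field names below are placeholders:

index=your_index _index_earliest=-1h
| table your_field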
Hello, I am working on creating a dashboard to monitor some data flow info. This dashboard will have many panels, so I was thinking I could hide them all when the dashboard loads and then only display the ones that the user selects in a multiselect input (maybe there's a better way?). I am unsure how to accomplish this, though, or whether it can even be done. Does anyone have any ideas on how I could accomplish this task? Thanks for the help!
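One common pattern (a sketch, assuming a classic Simple XML dashboard; the token, choice, and panel names are made up): give each panel a depends token and set or unset those tokens from the multiselect's change handler. A panel whose depends token is unset stays hidden, so nothing shows until the user selects it:

<fieldset>
  <input type="multiselect" token="show_panels">
    <label>Panels to display</label>
    <choice value="flow">Flow panel</choice>
    <choice value="errors">Errors panel</choice>
    <change>
      <eval token="show_flow">if(like("$show_panels$", "%flow%"), "true", null())</eval>
      <eval token="show_errors">if(like("$show_panels$", "%errors%"), "true", null())</eval>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_flow$">
    <title>Flow panel</title>
    <!-- search and visualization here -->
  </panel>
  <panel depends="$show_errors$">
    <title>Errors panel</title>
    <!-- search and visualization here -->
  </panel>
</row>

In Simple XML, an eval token expression that returns null() unsets the token, which is what hides the dependent panel again when the choice is deselected.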
Hello, we currently have the following chart created. I would like to show each district split by week over a 4-week time period, as shown in the Administration bar below. However, I'm struggling to get the results we want. Any assistance would be greatly appreciated!

Base search:

```Grab badging data for the previous week```
index=* sourcetype="ms:sql" CARDNUM=* earliest=-4w@w latest=-0@w
| bin span=1d _time
| eval week_number=strftime(_time,"%U")
| dedup _time CARDNUM
| rename CARDNUM as badgeid
```lookup Ceridian and Active Directory and return fields associated with employeeID and badgeid```
| join type=left badgeid
    ```Use HR records to filter on only Active and LOA employees```
    [search index=identities sourcetype="hr:ceridian" ("Employee Status"="Active" OR "Employee Status"="LOA*") earliest=-1d@d latest=@d
    | eval "Employee ID"=ltrim(tostring('Employee ID'),"0")
    | stats count by "Employee ID"
    | rename "Employee ID" as employeeID
    | fields - count
    ```Filter on Hybrid Remote users in Active Directory that are not Board Members and are in the Non-Branch region```
    | lookup Employee_Data_AD_Extract.csv employeeID OUTPUT badgeid badgeid_1 RemoteStatus District employeeID Region]
| where like(RemoteStatus,"%Hybrid%") AND NOT like(District,"Board Members") AND Region="Non-Branches"
```Calculate the number of badge check-ins in a given week by badgeid```
| stats latest(Region) as Region latest(employeeID) as employeeID latest(District) as District latest(RemoteStatus) as status count as "weekly_badge_in" by badgeid week_number
```Compensate for temporary badge check-in (primary badge = badgeid, temporary badge = badgeid_1)```
| append [| stats latest(Region) as Region latest(employeeID) as employeeID latest(District) as District latest(RemoteStatus) as status count as "weekly_badge_in" by badgeid_1 week_number
    | rename badgeid_1 as badgeid ]
| eval interval=case('weekly_badge_in'>=3,">=3", 'weekly_badge_in'<3,"<3")
```Calculation to determine the number of employees within District that are Hybrid Remote but have not badged in```
| join District
    [| inputlookup Employee_Data_AD_Extract.csv
    | fields badgeid badgeid_1 RemoteStatus District employeeID Region
    | where like(RemoteStatus,"%Hybrid%") AND NOT like(District,"Board Members") AND Region="Non-Branches"
    | stats count as total by District]
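One possible final step (a sketch; it reuses the week_number and District fields the search above already produces): ending the pipeline with chart over District split by week_number gives one cluster of four weekly bars per district:

| chart dc(badgeid) as badge_holders over District by week_number

dc(badgeid) counts distinct badge holders; replace it with count if raw check-in rows are what should be charted.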
Hello, our customer has decided to end use of Splunk in favor of Sumo Logic, but we are looking to keep up internal use of Splunk due to 110GB worth of perpetual licensing we have left over. We are currently filtering out non-essentials, and for us one of the big players is Linux syslog. I am attempting to use transforms and props to filter out everything that isn't an authentication failure. The regular expression is looking for the string of text "authentication failure". I tested my regex in regex101 and everything checks out, but when I turn on the syslog sourcetype, the proverbial flood gates are still opening up.

Can someone take a look at these and let me know what looks wrong here? The transforms are meant to bring in only events with "authentication failure" and toss out everything else.

Props.conf

[syslog]
TRANSFORMS-set=set_parsing,set_null

Transforms.conf

[set_parse]
REGEX = \bauthentication\b\s\bfailure\b
DEST_KEY = queue
FORMAT = indexQueue

[set_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
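Two things stand out (hedged observations, following the null-queue routing pattern from the Splunk docs): props.conf references set_parsing, but the transforms stanza is named [set_parse], so the keep rule never runs; and transforms in a TRANSFORMS- list apply in order, so the catch-all null rule should come first and the keep rule last, letting the keep rule win for matching events. A corrected sketch:

# props.conf
[syslog]
TRANSFORMS-set = set_null,set_parsing

# transforms.conf
[set_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[set_parsing]
REGEX = authentication\sfailure
DEST_KEY = queue
FORMAT = indexQueue

These settings must sit on the first heavy forwarder or indexer that parses the data; a universal forwarder will not apply them.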
Hello! We would like to extend our alarm for our users' monthly failed logons. I have created the following search. There is a problem with the table: it is not showing me "Workstation" and "Source_Network_Address"; Affected and Count are working fine. I did some troubleshooting and found out that the line with "stats count as" is the reason, as it works without that (and then, of course, shows everything except Count). Does anyone have an idea how I can create a table and a counter?

index=*..... (Account_Name="*" OR Group_Name="*") EventCode="4625" NOT EventCode IN ("4735", "4737", "4755") NOT Account_Name="*$*" Name
| eval time=_time
| eval Operator=mvindex(Account_Name, 0)
| eval Affected=mvindex(Account_Name, 1)
| eval Group=mvindex(Account_Name, 2)
| eval Workstation=mvindex(Workstation_Name, 0)
| eval Group=if(isnull(Group),Group_Name,Group)
| eval Workstation=if(isnull(Workstation),"",Workstation)
| eval Workstation=nullif(Workstation,"")
| eval Affected=if(isnull(Affected),Account_Name,Affected)
| eval ExpirationTime=if(isnull(Expiration_time),"",Expiration_time)
| rex field=Message "(?<Message>[^\n]+)"
| stats count as Count by Affected
| table Affected, Workstation, Source_Network_Address, Count
| sort -Count
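The cause (a sketch of one possible fix): stats keeps only the fields it aggregates or groups by, so Workstation and Source_Network_Address are dropped before the table command ever sees them. Carrying them through as aggregations fixes it:

| stats count as Count, values(Workstation) as Workstation, values(Source_Network_Address) as Source_Network_Address by Affected
| table Affected, Workstation, Source_Network_Address, Count
| sort -Count

values() collects the distinct workstations and addresses per affected account; use latest() instead if only the most recent one is wanted.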
Hi all,

rex "WifiCountryDetails\W+(?<WifiCountryDetails>[\w*\s*]+)"

We are using the above rex to get the Wi-Fi country details. The problem is that while fetching the data, if the Wi-Fi country name is empty, the rex automatically grabs the next field's value instead. If WifiCountryDetails is empty, it should show as empty in the data. Please let me know how to achieve this.
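One possible fix (a sketch; it assumes the fields appear one per line with an optional ":" or "=" delimiter, since the exact event layout isn't shown in the post): the \W+ separator and the [\w*\s*]+ class can both run across line breaks into the next field, and the + also requires at least one character, so an empty value forces the match to spill forward. Anchoring the capture to the rest of the same line and allowing an empty match keeps empty values empty:

| rex "WifiCountryDetails[:=]?[^\S\r\n]*(?<WifiCountryDetails>[^\r\n]*)"

Here [^\S\r\n]* matches only horizontal whitespace, so it cannot consume the line break, and [^\r\n]* happily matches nothing when the value is absent.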
Hi, how do I limit the results to one per host? I have a (random) search query and 10 hosts. For each host, hundreds of events are shown. In a statistics table, I want to show only one event per host. This way, I can check whether each host has the logfile; it doesn't matter what the contents of the logfile are. How do I perform this search? This statistics table, or Splunk dashboard, will have the following function: check if the log exists on every server.
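Two minimal options (sketches; the index and source names are placeholders): dedup keeps the first event per host, while stats builds a one-row-per-host table, which fits the "does the log exist on every server" check:

index=your_index source=/path/to/your.log
| dedup host

index=your_index source=/path/to/your.log
| stats count latest(_time) as last_seen by host
| convert ctime(last_seen)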
Hi, I would like to implement the following behavior in Dashboard Studio: when a user clicks on a line chart showing the trend of a flow in terms of error count, I would like to show, in a drilldown graph, the trend of all the errors for that specific flow. I tried with the following token: flow=click.value2, but it does not work. Any hint? Thank you. Best regards
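One possible reason (a hedged sketch; the token name is taken from the post, and the key is an assumption to verify against the Dashboard Studio docs): click.value2 is a Simple XML drilldown token and does not exist in Dashboard Studio. In Studio, click tokens are set in the chart's JSON source through a drilldown.setToken event handler, for example:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                { "token": "flow", "key": "name" }
            ]
        }
    }
]

The drilldown panel's search can then reference $flow$; which keys are exposed (name, value, row.<field>.value, etc.) depends on the visualization type.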