All Topics

We have a Splunk gateway heavy forwarder that sends a disk-usage alert whenever usage exceeds 80%, and the alert triggers frequently. To resolve this we need to clear space on the mount point /mnt/spk_fwdbck. That mount point holds roughly the last three years of folders and subfolders, such as acs5x, apc, blackhole, bpe, cisco-ios, oops, paloalto, pan_dc, vpn, windows, unix, threatgrid, pan-ext, ise, ironport, firewall, f5gtmext, and f5-asm-tcp. Are these folders safe to delete by year, say 2020 to 2023? Can we delete complete previous years' logs, such as 2020, and if so, does it affect anything? I'm trying to understand this concept; please help.
I have a query (below) that searches an index and outputs to a CSV file. However, the CSV keeps growing, and I would like to purge entries older than 90 days. How do I do that?

index=suspicious_domain
| rename "sources{}.source_name" as description, value as domain, last_updated as updated, mscore as weight
| stats values(type) AS type latest(updated) as updated latest(weight) as weight latest(description) as description latest(associations_type) as associations_type latest(associations_name) as associations_name by domain
| fields - count
| outputlookup append=t suspicious_domain.csv
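A minimal SPL sketch of one approach, assuming the updated field holds an epoch timestamp (wrap it in strptime first if it is a string): schedule a second search that reads the lookup back, keeps only rows from the last 90 days, and overwrites the file.

| inputlookup suspicious_domain.csv
| where updated >= relative_time(now(), "-90d")
| outputlookup suspicious_domain.csv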
Is there a way of capturing the x, y, and z data from a stacked chart? At the moment my axes are: x = build info, y = duration, z = process name (various names stacked in the same column).
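If this is a Simple XML chart, the built-in drilldown tokens may already carry all three values: $click.value$ is the clicked x-axis value, $click.name2$ is the clicked series name (the stacked z value), and $click.value2$ is that series' y value. A sketch, assuming a token-setting drilldown:

<chart>
  <search><query>...</query></search>
  <drilldown>
    <set token="build">$click.value$</set>
    <set token="process">$click.name2$</set>
    <set token="duration">$click.value2$</set>
  </drilldown>
</chart>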
for ip in ips:
    query = f'search "{ip}" earliest=-1d index=main | stats count by index'
    job = service.jobs.create(query)

When I have 500 IPs, I am only able to generate 100 jobs. Is there a way to generate all 500 jobs?
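The 100-job ceiling usually comes from the server-side search quotas (e.g. srchJobsQuota in authorize.conf) rather than from the SDK. A hedged workaround sketched with splunklib (connection details and max_concurrent are illustrative assumptions): throttle the loop so only a few jobs are outstanding at once, reaping finished ones before creating more.

import time
import splunklib.client as client  # Splunk Python SDK

service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

ips = ["10.0.0.1", "10.0.0.2"]  # your list of 500 addresses
max_concurrent = 20  # assumption: stays under the role's search quota
running = []

for ip in ips:
    # wait for a free slot before creating the next job
    while len(running) >= max_concurrent:
        running = [j for j in running if not j.is_done()]
        time.sleep(2)
    query = f'search "{ip}" earliest=-1d index=main | stats count by index'
    running.append(service.jobs.create(query))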
I'm working with a table of conversation data. All conversations start out as a bot chat and can be escalated to a human agent; the ConversationId remains persistent through the escalation. Each ConversationEntry is a message, inbound or outbound, in a MessagingSession, and ConversationId is the MessagingSession parent of the individual entries. All MessagingSessions I'm looking at will have an EntryType=ChatbotEstablished; not all will have an EntryType=BotEscalated. I can't figure out how to calculate the percentage of conversations that had an escalation, i.e. BotEscalated/ChatbotEstablished. Below is my query and its stats output.

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats count(ConversationId) as EntryCount by EntryType

EntryType            EntryCount
BotEscalated         3
ChatbotEstablished   10
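One way to collapse the two counts onto a single row and divide, sketched against the same search (dc() guards against a conversation logging the same entry type more than once):

index=sfdc sourcetype=sfdc:conversationentry EntryType IN ("ChatbotEstablished", "BotEscalated")
| stats dc(eval(if(EntryType="BotEscalated", ConversationId, null()))) as escalated
        dc(eval(if(EntryType="ChatbotEstablished", ConversationId, null()))) as established
| eval escalation_pct=round(100 * escalated / established, 2)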
I am having an issue with Splunk version 9.0.4.1: it is not reporting the correct license usage for my instance. All the data appears as expected, but the license usage is not being recorded, which shows us as having unlimited usage.
All, I am having an issue with my Splunk environment: I keep getting "ingestion_latency_gap_multiplier has exceeded configured value", and it says the issue is with my indexers. Any information would help; I am running version 9.0.4.1.
Hi everyone, I've seen a few posts here and elsewhere that seem to describe the same issue I'm having, but none of the solutions do the trick for me. Any help is appreciated. The goal is to flag users whose search engine queries (field name searched_for) contain words stored in a lookup table. Because those words could occur anywhere in the search query, wildcard matching is needed.

I have a lookup table called keywords.csv with two columns:

keyword,classification
splunk,test classification

The first use of the lookup works as it should, showing only events with a keyword match anywhere in searched_for:

| search [| inputlookup keywords.csv | eval searched_for="*".keyword."*" | fields searched_for | format]

The next step is to enrich the remaining events with the classification, and then filter out all events without a classification:

| lookup keywords.csv keyword AS searched_for OUTPUT classification
| search classification=*

The problem is that the SPL above only enriches events in which the keyword exactly matches searched_for: if I search Google for "splunk", the event is enriched; if I search for "word splunk word", it is not. Is there a way around this without using | lookup, or am I doing something wrong here? I'm out of ideas. I've tried:
- Prepending and appending * to the keyword in the lookup table (*splunk*)
- Adding a lookup definition with match type WILDCARD(searched_for)
- Switching the match type and SPL to the field "url" (which comes straight from the logs and contains the search query string), in case the issue was searched_for being an evaluated field; still no enrichment
- Deleting and re-creating the lookup, definition, and match type
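For reference, the combination that usually works (a sketch; the definition name keywords_wildcard is illustrative): the WILDCARD match type must name the lookup file's own column (keyword), not the event field; the wildcards must be stored in the CSV values themselves; and the search must call the lookup definition rather than the bare .csv file.

keywords.csv:

keyword,classification
*splunk*,test classification

transforms.conf (or the equivalent lookup definition in the UI):

[keywords_wildcard]
filename = keywords.csv
match_type = WILDCARD(keyword)

Search:

| lookup keywords_wildcard keyword AS searched_for OUTPUT classification
| search classification=*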
Hello, I need help filtering fields out of an event, to reduce the size of each log before it is indexed in Splunk. Reviewing the documentation on ingest actions, I see it is possible to exclude whole events based on regular expressions; however, I don't need to exclude events, only specific fields.
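Two hedged options for dropping specific fields rather than whole events: an Ingest Actions mask rule whose regex matches only the unwanted field, or an index-time SEDCMD in props.conf on the heavy forwarder/indexer. A sketch of the latter (the sourcetype and field name are placeholders):

[your:sourcetype]
SEDCMD-drop_debug_payload = s/"debug_payload":"[^"]*",?//g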
How can I extract the node name from these different GC source locations? I have the three sample source values below and am looking for a rex that extracts the node names "node02", "node03", and "web39". My rex command is not working.

source=E:\total\int\ts1\Ddoss\node\node02\data\gc.log
source=E:\total\int\ts1\Ddoss\swxx\node03\data\gc.log
source=E:\total\int\ts1\Ddoss\web\web39\data\gc.log
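A hedged rex sketch that keys off the directory immediately before \data\gc.log. Each literal backslash is written as \\\\: SPL string quoting halves it to \\, which the regex engine then reads as one backslash.

| rex field=source "(?<node_name>[^\\\\]+)\\\\data\\\\gc\.log$"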
Hi, I have an issue with the HEC service in our standalone Splunk installation (9.0.6). It simply does not complete the TCP connection, for some unknown reason. The local firewall is off; ping works, but TCP does not complete the connection. Everything else works normally: I can connect to Splunk and search data, and universal forwarders report as usual (no deployment errors). Only HEC does not work as it should.

[screenshot: HEC global settings]

In Wireshark the TCP retransmissions can be seen, but I can't find the root cause. Any idea what could be happening? Many thanks.
Hello, FYI: after OS and kernel updates on Red Hat Enterprise Linux 8.8, we saw "The TCP output processor has paused the data flow" messages accompanied by extreme indexer slowness.
I need help capturing variables from ModSecurity logs. I can't write regular expressions well; is there an add-on that can make this easier?
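Even without an add-on, ModSecurity audit messages carry their variables as bracketed key/value pairs, so a multi-match rex can pull them all out. A sketch, assuming the classic [id "949110"] [msg "..."] layout:

| rex max_match=0 "\[(?<modsec_key>\w+) \"(?<modsec_value>[^\"]*)\"\]"
| eval modsec_pairs=mvzip(modsec_key, modsec_value, "=")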
In my search results I am getting IP and user details. I want to filter my search results where the same IP has been used by any user matching "*@xyz.com" in the last 30 days.
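A sketch of one approach with eventstats, assuming the events carry fields named ip and user: collect every user seen per IP across the 30-day window, then drop IPs where any of those users matches the domain (swap isnull for isnotnull to invert the filter).

<your base search> earliest=-30d
| eventstats values(user) as users_on_ip by ip
| eval xyz_users=mvfilter(match(users_on_ip, "@xyz\.com$"))
| where isnull(xyz_users)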
These large logs are not regular JSON, so I need rex to extract fields. A-type logs have name and uid; B and C logs have uid and oid. The dashboard accepts a "users" input, which allows multiple names separated by commas. I then use the names to find the uids, pull the related uid/oid data from the B logs, and exclude matches found in the C logs. What I don't know how to do:

1. Substitute the value of the users token into a search statement as keywords.
2. Combine the extracted uid values with commas for use in a "search <field> in (...)" clause.

Thanks. For example:

A logs:
... x1 ...uid=123...
... y2 ...uid=456...
... z3 ...uid=789...

B logs:
.... oid=989 ...uid=123 ...
.... oid=566 ...uid=456 ...
.... oid=486 ...uid=789 ...

C logs:
...cancel_order... oid=989 ...uid=123 ...
...cancel_order... oid=566 ...uid=456 ...
...cancel_order... oid=486 ...uid=789 ...

The dashboard has a text input, "users", where a user can enter multiple names separated by commas, e.g. "x1,z3". I want to use that value in a search statement such as:

| makeresults
| eval users="x1,z3"
| eval names=replace(users, ",", " OR ")    => expected result: x1 OR z3

| search source="alog" $names$              => substitute the names value as keywords
| rex "name=(?<name>\S+)"
| rex "uid=(?<uid>\d+)"
| table name, uid
| join type=left max=0 uid
    [ search source="blog"
      | rex "uid=(?<uid>\d+)"
      | rex "oid=(?<oid>\d+)"
      | search uid in (uids)                => uids = the uid values joined with commas, e.g. (123,456,789)
      | table uid, oid ]
| join type=left max=0 oid
    [ search source="clog" cancel_order
      | rex "uid=(?<uid>\d+)"
      | rex "oid=(?<oid>\d+)"
      | search uid in (uids)                => same combined uid list
      | table uid, oid, status ]
| where isnull(status)
| stats count(oid) by name
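Two hedged sketches for the numbered questions above, using the sample field and source names ("x1,z3" stands in for the $users$ token). For (1), a subsearch returning a field literally named "search" injects its value verbatim as keywords. For (2), no manual comma-joining is needed: a subsearch that renames uid to "search" and pipes through format returns a ready-made ( ( 123 ) OR ( 456 ) ) clause.

For (1):

source="alog"
    [ | makeresults
      | eval search="(" . replace("x1,z3", ",", " OR ") . ")"
      | fields search ]
| rex "name=(?<name>\S+)"
| rex "uid=(?<uid>\d+)"

For (2):

source="blog"
    [ search source="alog"
      | rex "uid=(?<uid>\d+)"
      | dedup uid
      | rename uid as search
      | format ]
| rex "uid=(?<uid>\d+)"
| rex "oid=(?<oid>\d+)"
| table uid, oid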
"The new Office 365 message trace logs have a delay throttle of 24 hours. I believe I understand the reasons behind this decision. Real-time information is important for SOC (Security Operations Cent... See more...
"The new Office 365 message trace logs have a delay throttle of 24 hours. I believe I understand the reasons behind this decision. Real-time information is important for SOC (Security Operations Center), and having a 24-hour gap in real-time data is a critical issue. One potential solution is to implement two Office 365 add-ons: one configured with the recommended settings and the other with the minimum possible delay time. Does this proposal make sense to anyone, and are there any associated risks?" Thank you for the help. 
Hi, I have created a classic dashboard based on a saved search, because the saved search serves as my asset-management search and contains a lot of fields. For now I need to create three text-box inputs and one drilldown. Below is the search I use to match my tokens:

| savedsearch "test 1"
| search hostname=$hostname$, ip=$ip$, ID=$id$, location=$location$

However, the search above doesn't work with the input fields, and I may need to add more input fields in the future. Please assist me with this. Thank you.
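A sketch of the usual fix, assuming each text input is given a default of * so an untouched box matches everything: drop the commas (the search command ANDs space-separated terms) and quote the tokens so values containing spaces survive. Each future input then just adds one more quoted term.

| savedsearch "test 1"
| search hostname="$hostname$" ip="$ip$" ID="$id$" location="$location$"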
We are currently ingesting ServiceNow logs through the Splunk Add-on for ServiceNow TA. However, the logs aren't being parsed properly: they arrive in a raw log format, which makes it increasingly difficult to build dashboards and the like. Does anyone have knowledge or experience of changing ServiceNow logs from a raw format to a structured one? Any help would be greatly appreciated.
Hello team, I have two drilldown input filters, filter1 and filter2; filter2 is based on the token from filter1. When I click Submit, I want to pass the tokens from filter1 and filter2 to a new dashboard.
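In Simple XML this is typically done with a link drilldown that forwards the form tokens in the URL (the app and target dashboard names here are placeholders):

<drilldown>
  <link target="_blank">/app/search/target_dashboard?form.filter1=$form.filter1$&amp;form.filter2=$form.filter2$</link>
</drilldown>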
My server runs Windows Server 2016 with Splunk 7, and I want to upgrade it to Splunk 9 and Windows Server 2019. What should the upgrade flow be so that I don't lose any of my existing Splunk 7 data?