All Topics


We are going to integrate WAF logs from AWS SQS. What is the best way to do it?
Hi, I am using dbxquery to fetch data from the database. The data set is huge, hence I am using maxrows=56406002. The query keeps loading for 30-40 minutes and then fails with the errors below, even though I am fetching only one year of data:

'Search auto-canceled'
'The search job has failed due to an error'

| dbxquery connection=XXX query="SELECT DATE, ENDDATE, BEGDA, ENDDA FROM PA2001 where BEGDA>=20160101 AND BEGDA<=20161231" maxrows=56406002
| streamstats count as SL_NO
| table DATE ENDDATE BEGDA ENDDA SL_NO
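One workaround worth sketching: split the year into smaller BEGDA windows so each dbxquery call returns only a fraction of the rows and stays under the auto-cancel threshold. The quarter boundary and reduced maxrows value below are illustrative assumptions, not tested against this table:

| dbxquery connection=XXX query="SELECT DATE, ENDDATE, BEGDA, ENDDA FROM PA2001 WHERE BEGDA>=20160101 AND BEGDA<=20160331" maxrows=15000000
| streamstats count as SL_NO
| table DATE ENDDATE BEGDA ENDDA SL_NO

Repeating this per quarter (or appending the windows) keeps each job small enough to finish.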
Hi All, I have set up a universal forwarder on a Windows machine to monitor a static file in JSON format. The logs are being forwarded, but they arrive as a single event, like below:

{"Env": "someenv12", "Name": "test12", "feature": "TestFeature12", "logLevel": "info", "Id": "1234", "date": 1652187242.57, "productName": "testproduct", "process_name": "test process", "pid": 695, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": "system_process"}
{"Env": "someenv13", "Name": "test13", "feature": "TestFeature12", "logLevel": "error", "Id": "234", "date": 1652187342.57, "productName": "testproduct12", "process_name": "test process", "pid": 685, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": "application_process"}
{"Env": "someenv14", "Name": "test14", "feature": "TestFeature13", "info": "error", "Id": "2344", "date": 1672187342.57, "productName": "testproduct13", "process_name": "test process", "pid": 695, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": "security"}

This entire thing is coming in as one event. I have applied line breakers in props.conf:

[test_sourcetype]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
BREAK_ONLY_BEFORE = {"Env"
MUST_BREAK_AFTER = \"\}
TIME_PREFIX = date
TIMEFORMAT = %s%4N
MAX_TIMESTAMP_LOOKAHEAD = 14

I have added it under /SplunkUniversalForwarder/etc/apps/splunk_TA_windows/local/props.conf. None of my line breaking is being applied; please help me with this. Should I add props.conf under the default folder instead? Regards, NVP
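A minimal props.conf sketch that often works for concatenated JSON objects like these. Note that a universal forwarder does not parse data, so line-breaking props generally take effect only on the indexer or heavy forwarder (or on the UF with INDEXED_EXTRACTIONS); the regex below assumes every event starts with {"Env", and the timestamp format is an assumption based on the sample:

[test_sourcetype]
SHOULD_LINEMERGE = false
# break between a closing brace and the next {"Env" (the capture group is consumed as the boundary)
LINE_BREAKER = \}(\s*)\{"Env"
# epoch seconds with two decimal places, e.g. 1652187242.57
TIME_PREFIX = "date":\s*
TIME_FORMAT = %s.%2N
MAX_TIMESTAMP_LOOKAHEAD = 20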
Hi, I am creating a dashboard where the data is provided via CSV, so I am using the inputlookup command. However, I need to search on one specific field (column) of the CSV, and what I am currently using is not working:

| inputlookup ABC | search Device Name = "sdf"

Can you please help?
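For what it's worth, a field name containing a space has to be quoted, and the spaces around = removed; a minimal sketch against the same lookup:

| inputlookup ABC
| search "Device Name"="sdf"

In where/eval expressions the field name takes single quotes instead: | where 'Device Name'="sdf"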
Hi, the network request data collected by the iOS device is being lost. There is often a period of time when the data is not available. Android has no such problem.
Hello, we are planning to upgrade from version 8.0.1 to 8.2.6 (the latest version), but we see that CentOS reaches end of life on December 31st. Does this version of Splunk still support CentOS?
Hello, I am trying to create a detection for the AWS exploitation tool Pacu.py. The goal is to detect the use of the enumeration tool within Pacu.py, which executes the following AWS commands in less than a second (see the sketch after this post):

ListUserPolicies
GetCallerIdentity
ListGroupsForUser
ListAttachedUserPolicies

Timeframe:
First Event: 2022-05-19 10:02:25
Last Event: 2022-05-19 10:02:26

Each command generates a separate event, so I was wondering if it is possible to create a search which detects these commands executed from the same account within a one-second timeframe? I am unsure how to specify a time window, so any help would be greatly appreciated.

Query:

index="aws-cloudtrail" "GetCallerIdentity" OR "ListUserPolicies" OR "ListGroupsForUser" OR "ListAttachedUserPolicies"
| table _time, principalId, userName, aws_account_id, sourceIPAddress, user_agent, command

Many Thanks
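A sketch of one way to bucket events into one-second windows and require all four commands in the same window; it assumes the eventName and userName fields that the AWS CloudTrail add-on normally extracts:

index="aws-cloudtrail" eventName IN ("GetCallerIdentity", "ListUserPolicies", "ListGroupsForUser", "ListAttachedUserPolicies")
| bin _time span=1s
| stats dc(eventName) as command_count values(eventName) as commands values(sourceIPAddress) as src by _time, userName
| where command_count >= 4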
Hi Everyone, I am trying to ingest change-related data from a database using DB Connect with a rising column input, and I have specified changerequestID as the rising column. The data has other fields as well, such as CreationTime, LastModifiedTime, and SolvedTime. If a change is open, the database values for columns such as LastModifiedTime and SolvedTime can be blank. My question: if these values are updated in the DB later, but the row was already ingested into Splunk via the rising column before the update, will the updated row be ingested into Splunk again? Thanks
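For context, a rising-column input only fetches rows whose rising column exceeds the stored checkpoint, along the lines of this sketch (the table name is a placeholder based on the description above; ? is DB Connect's checkpoint placeholder):

SELECT * FROM change_requests
WHERE changerequestID > ?
ORDER BY changerequestID ASC

Since an update to LastModifiedTime does not move changerequestID past the checkpoint, an already-ingested row would not be picked up again under this scheme; a timestamp-based rising column is the usual alternative.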
Hi, some users complain about Splunk search. Before Splunk, they simply opened the log file and looked for issues.

1 - As you know, log files start at the first line and finish at the last line, while Splunk search is reversed: the newest event shows first.
2 - Another issue is that they can't trace transactions in Splunk easily. Because of Splunk's result limits they have to set a smaller time range, and imagine how hard that is when over 1000 transactions occur each second.

FYI: they tried "sort _raw", but it is slow; they tried the transaction command, but their transactions are unstructured and not easy to find; they tried removing the limits, but that makes it slow.

So they prefer to use log files instead of Splunk. How can I help them use Splunk effectively? Any ideas? Thanks
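One small thing that may help with point 1: sort on _time ascending rather than on _raw, with the result limit disabled; a sketch (the index name is a placeholder):

index=app_logs
| sort 0 _time

The 0 argument removes sort's default 10,000-row limit; running | reverse on an already-retrieved result set is another option.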
I am running ldapsearch in a scheduled report which then runs outputlookup, and I am getting the error message below. The ldapsearch returns 250 results and works properly when I run it manually.

05-26-2022 04:08:16.147 +0300 ERROR SearchMessages - orig_component="script" app="amdocscybermain" sid="scheduler__odeliab__amdocscybermain__RMD59063549f9a2aae97_at_1653527160_38496" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'ldapsearch' returned error code 15.

app = amdocscybermain
component = SearchMessages
date_zone = 180
eventtype = err0r error
host = illinissplnksh01
index = _internal
log_level = ERROR
message = External search command 'ldapsearch' returned error code 15.
sid = scheduler__odeliab__amdocscybermain__RMD59063549f9a2aae97_at_1653527160_38496
source = /opt/splunk/var/log/splunk/search_messages.log
sourcetype = splunk_search_messages
Hi All, I set ignoreOlderThan = 10d and it worked as expected: files older than 10 days were not searched. Once I set the value to 30d, all files came out; so far, still as expected. However, after I set it back to 10d there was no difference, and all files, including those older than 10 days, came out as well. Is this expected? I have restarted both the UF and the server. Thanks.
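For reference, ignoreOlderThan is set per monitor stanza in inputs.conf on the forwarder; a minimal sketch with a placeholder path:

[monitor:///var/log/myapp]
ignoreOlderThan = 10d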
Hi All, I have been using the Splunk Stream app for a long time; suddenly some problems appeared:

1 - Netflow is not working and no data is indexed. Even after installing the new version of the Stream app (8.0.2), the configuration page does not load: when I click on "New Stream \ Metadata Stream" nothing loads, so I can't configure the app.
2 - I see the alerts below. I have done everything suggested in the community, such as renaming the current mongo folder to "old" and running "splunk clean kvstore --local", but nothing worked for me.

Failed to start KV Store process. See mongod.log and splunkd.log for details.
KV Store changed status to failed.
KVStore process terminated.
KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

Thank You
Hello Splunkers!! Can you help me understand how I can compare last week's data vs the last 3 hours in Splunk? Previously I compared the current week and the previous week of data using the timewrap command, but last week vs 3 hours is confusing me. Please give me a solution or suggestion. The screenshot below is from New Relic.
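One common pattern, sketched below, overlays the current 3-hour window with the same 3-hour window seven days earlier by shifting the older events forward in time (the index name is a placeholder; 604800 seconds = 7 days, and 171h/168h bound the same window one week back):

index=my_app earliest=-3h@m latest=now
| eval period="current"
| append [ search index=my_app earliest=-171h@m latest=-168h@m | eval period="last_week" | eval _time=_time+604800 ]
| timechart span=5m count by period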
How do I generate an SSL certificate for the Event Service server? There is documentation on how to secure the Controller and EUM servers, but I didn't find documentation on how to generate the .CSR certificate request for the Event Service. Our Event Service SSL certificate has expired, and I am not sure how it was created and imported into the keystore. Thank you. Ferhana
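In case it helps, a generic Java keytool sketch for generating a key pair and a CSR in a JKS keystore; the alias, filenames, and distinguished name are placeholders, and the actual keystore path for an Event Service deployment may differ:

# generate a new key pair in the keystore
keytool -genkeypair -alias events-service -keyalg RSA -keysize 2048 -keystore keystore.jks -dname "CN=events.example.com"
# produce a CSR for the certificate authority to sign
keytool -certreq -alias events-service -file events-service.csr -keystore keystore.jks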
I created an AWS EC2 instance and installed Splunk Enterprise on it, and opened all the rules for ports 8000 and 8089. I can open the Splunk GUI from India, but whenever my peer tries to open it from the US he gets the message "Server Error". Is this related to EC2 security groups, or a password issue? No logs are being recorded in the internal data either. Could you please help us fix this?
Hi friends, I just would like to know whether I need a different HEC token for every sourcetype. I couldn't find any documentation related to this. Thanks.
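For reference, a single token can usually carry events for multiple sourcetypes, since the sourcetype can be set per event in the HEC payload (subject to the token's configuration); a sketch with placeholder host and token:

curl -k https://splunk.example.com:8088/services/collector/event -H "Authorization: Splunk <your-token>" -d '{"event": "hello", "sourcetype": "app_a"}'
curl -k https://splunk.example.com:8088/services/collector/event -H "Authorization: Splunk <your-token>" -d '{"event": "world", "sourcetype": "app_b"}'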
Hi Splunkers, is it possible to make dynamic token results based on a radio input and multiple link inputs sharing the same token value? I want to achieve a result like this: if I click value "1" on the radio button, link input 1 shows up and its default value is passed to the table input; it should be "A". If I change value "1" to value "2", link input 2 shows up and its default value updates the table; it should be "D". Below is my sample query. Appreciate your help on this. Thanks in advance!

<form>
  <label>Multi Link Input</label>
  <fieldset submitButton="false">
    <input type="radio" token="select_link" searchWhenChanged="true">
      <label>Select Link List</label>
      <choice value="1">1</choice>
      <choice value="2">2</choice>
      <change>
        <condition value="2">
          <set token="show_2">true</set>
          <unset token="show_1"></unset>
        </condition>
        <condition value="1">
          <set token="show_1">true</set>
          <unset token="show_2"></unset>
        </condition>
      </change>
      <default>1</default>
    </input>
    <input type="link" token="select" searchWhenChanged="true" depends="$show_1$">
      <label>1</label>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <default>A</default>
    </input>
    <input type="link" token="select" searchWhenChanged="true" depends="$show_2$">
      <label>2</label>
      <choice value="D">D</choice>
      <choice value="E">E</choice>
      <choice value="F">F</choice>
      <default>D</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>| makeresults | eval $select$ = "$select$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>
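One possible approach, untested: have the radio's change handler also reset the shared token to the matching default, so the table updates as soon as the radio switches; a sketch of the modified change block only:

<change>
  <condition value="2">
    <set token="show_2">true</set>
    <unset token="show_1"></unset>
    <set token="select">D</set>
  </condition>
  <condition value="1">
    <set token="show_1">true</set>
    <unset token="show_2"></unset>
    <set token="select">A</set>
  </condition>
</change>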
Hi, Palo Alto is one of our largest log sources, and we have been ingesting many different types of PAN logs for years via the Splunk_TA_paloalto add-on. The firewalls send logs to a syslog server that also functions as a UF. On 04/14/22 we noticed that the pan:threat sourcetype started to grow in volume. It's roughly the same number of events, but the events are now on average 2x, 3x, up to 5x larger in bytes. I also noticed that some of the fields are receiving the wrong data. When I track this back, both issues started on 4/14, and I have determined that these larger logs are all coming from one HA pair out of dozens. I am having a very tough time coming up with explanations for the growth, and options to fix the issue on the Splunk side. Has anyone ever seen this, or have any recommendations on how I might resolve it?
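To quantify which hosts are producing the oversized events, something like this sketch may help pinpoint when and where the growth started (the index name is a placeholder):

index=pan sourcetype=pan:threat
| eval raw_bytes=len(_raw)
| timechart span=1d avg(raw_bytes) by host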
I have a field called "Risk Type" that holds categorical data about the type of risk of an event. For example, one event might say "Type - Network", but another event with more than one risk type will say "Type - Network Type - USB Type - Data", with all three risk types in a single value. What I want is to extract each type as a separate value, so that event X would produce three entries, one per type:

Event X Type - Network
Event X Type - USB
Event X Type - Data

I tried mvexpand, but it did not separate the types into multiple values. I also thought of using the rex command, but I do not know what regular expression would do this. How do I accomplish this?
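mvexpand only splits fields that are already multivalue, so the string has to be broken up first; a sketch using rex with max_match=0 to capture every "Type - ..." occurrence into a multivalue field (the base search and the \w+ pattern for type names are assumptions):

index=risk_events
| rex field="Risk Type" max_match=0 "(?<risk_type>Type - \w+)"
| mvexpand risk_type
| table _time, risk_type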
I've done this in the past and it works to get data for today up to the latest 5-minute span, but I'm hoping to speed it up with tstats.

index="foo" sourcetype="foo" earliest=-0d@d latest=[|makeresults | eval snap=floor(now()/300)*300 | return $snap]
| stats sum(b) as bytes ....

I tried this, but it doesn't work:

| tstats sum(stuff.b) as bytes from datamodel="mymodel.stuff" where index="foo" sourcetype="foo" earliest=-0d@d latest=[|makeresults | eval snap=floor(now()/300)*300 | return $snap]
| ....

I could do this instead, but it doesn't seem to be much better and, quite frankly, is a bit more confusing:

| tstats sum(stuff.b) as bytes from datamodel="mymodel.stuff" where index="foo" sourcetype="foo" earliest=-0d@d by _time span=1min
| where _time < floor(now()/300)*300
| rename stuff.* as *
| stats sum(bytes) as bytes ....

If there is any way to do it in the tstats command, that would be great ... thoughts?
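One small refinement of the third approach, sketched here: since the cutoff is snapped to 5 minutes and both the bucket edges and floor(now()/300)*300 are epoch-aligned, bucketing with span=5m instead of 1min keeps the edges aligned with the cutoff and reduces the number of intermediate rows:

| tstats sum(stuff.b) as bytes from datamodel="mymodel.stuff" where index="foo" sourcetype="foo" earliest=-0d@d by _time span=5m
| where _time < floor(now()/300)*300
| stats sum(bytes) as bytes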