All Topics


Hello, please let me know how I would write a props configuration file for this CSV file. A segment of sample data from the CSV file is given below. Any help will be highly appreciated, thank you!
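A generic sketch of a props.conf stanza for a header-row CSV, since the details depend on the sample data; the sourcetype name, timestamp field, and time format are assumptions to adapt:

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S
SHOULD_LINEMERGE = false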
I need to do an analysis of API calls using logs: avg, min, max, percentile95, and percentile99 response times, as well as hits per second. So, if I have events like below:

/data/users/1443 | 0.5 sec
/data/users/2232 | 0.2 sec
/data/users/39 | 0.2 sec

Expectation: I want them grouped by their API pattern, like below:

proxy max_response_time
/data/users/{id} | 0.5 sec

These path variables (like {id}) can be numerical or can be strings with special characters. I have about 3000 such API patterns that contain path variables. They can be categorized into 3 types: those with a path variable only at the end, those with 1 or more path variables only in the middle, and those with 1 or more path variables in the middle as well as at the end. Note: there are no arguments after the API, i.e. nothing like /data/view/{name}/pagecount?age=x; there will be just the URI part.

proxy method request_time
/data/users/{id} POST 0.046
/server/healthcheck/check/up GET 0.001
/data/commons/people/multi_upsert POST 0.141
/store/org/manufacturing/multi_read POST 0.363
/data/users/{id}/homepage/{name} POST 0.084
/data/view/{name}/pagecount PUT 0.043

Category 1 (path variable only at the end):
/data/users/{id} POST 0.046

Category 2 (1 or more path variables only in the middle):
/data/view/{name}/pagecount PUT 0.043
/data/view/{name}/details/{type}/pagecount PUT 0.043

Category 3 (1 or more path variables in the middle and also at the end):
/data/users/{id}/homepage/{name} POST 0.084
/data/users/{id}/homepage/{type}/details/{name} POST 0.084

Current query:

index="*myindex*" host="*abc*" host!=*ftp* sourcetype!=infra* sourcetype!=linux* sourcetype = "nginx:plus:access" | bucket span=1s _time | stats count by env,tenant,uri_path,request_method,_time

I need the uri_path to be grouped as per the API patterns I have. One option is to add 3000 regex replace statements to the query, one per API pattern, like the one below, but that makes the query too heavy to parse. I tried something like this for a sample pattern /api/data/users/{id}:

| rex mode=sed field=uri_path "s/\/api\/data\/users\/([^\/]+)$/\/api\/data\/users\/{id}/g"
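Rather than 3000 rex/sed rules in the query, one option is a wildcard lookup that maps concrete URIs to their pattern; this is only a sketch, and the lookup name, file name, and sample rows below are made up:

transforms.conf:

[api_patterns]
filename = api_patterns.csv
match_type = WILDCARD(uri_path)
max_matches = 1

api_patterns.csv (one row per pattern, with more specific rows first):

uri_path,api_pattern
/data/users/*/homepage/*,/data/users/{id}/homepage/{name}
/data/users/*,/data/users/{id}
/data/view/*/pagecount,/data/view/{name}/pagecount

Search usage:

... sourcetype="nginx:plus:access"
| lookup api_patterns uri_path OUTPUT api_pattern
| bucket span=1s _time
| stats count by env, tenant, api_pattern, request_method, _time

This keeps the 3000 patterns out of the SPL itself and lets you maintain them in a single CSV.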
How do I search for a complete list of all the apps on my deployment server? If possible, excluding the built-in apps? Thank you in advance.
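One possible approach, run from the deployment server itself, is the REST endpoint behind Forwarder Management; both the endpoint and the idea of filtering built-ins by name should be verified against your version:

| rest splunk_server=local /services/deployment/server/applications
| table title

For the instance's own installed apps instead, | rest /services/apps/local combined with a NOT title IN ("search", "launcher", ...) filter is another option, with the excluded names being whatever stock apps ship with your deployment.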
Hi, here is my log:

2020-01-19 13:20:15,093 INFO ABC.InEE-Product-00000 [MyProcessor] Detail Packet: M[000] T[111] P[0A0000] AT[00] R[0000] TA[ABC.OutEE-Product] Status[OUT-LOGOUT,EXIT]
2020-01-19 13:36:08,185 INFO ABC.InEP-Product-00000 [MyProcessor] Detail Packet Lost: M[000] T[111] SA[ABC.InEE-Product] R[0000]

What is the rex to extract SOURCE=ABC.InEE-Product, TARGET=ABC.OutEE-Product, Model=000, Tip=111, POD=0A0000?

Any idea? Thanks,
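A rough sketch, assuming SOURCE is the component name after INFO (with the trailing -00000 stripped), TARGET comes from the TA[...] block, and the M/T/P values map to Model/Tip/POD; the field names are only examples:

| rex "INFO\s+(?<SOURCE>\S+)-\d+\s"
| rex "TA\[(?<TARGET>[^\]]+)\]"
| rex "M\[(?<Model>[^\]]+)\]\s+T\[(?<Tip>[^\]]+)\](\s+P\[(?<POD>[^\]]+)\])?"
| table SOURCE TARGET Model Tip POD

POD is optional in the last rex because the second sample event has no P[...] block.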
If I run this search, I generate two numeric fields, one called number and the other called decimal:

| makeresults 1 | eval number = 7 | eval decimal = 7.0

When I choose to export this data as CSV, there are quotes around decimal but not around number. Is it possible to ensure that neither field has quotes when the CSV is downloaded?
I have created a custom business transaction in one of my applications. Now I want to move those business transactions to another application (both are the same code base, but in different environments). I tried the application import/export option, but I have to make lots of changes. Is there any other way to move the custom business transactions?
An analyst adds a note to an investigation. Another analyst from another shift deletes this note. Where is the audit trail that allows me to see when and who did what in an investigation? According to the doc: "Investigation details from investigations created in versions earlier than 4.6.0 of Splunk Enterprise Security are stored in two KV Store collections, investigative_canvas and investigative_canvas_entries. Those collections are preserved in version 4.6.0 but the contents are added to the new investigation KV Store collections. So to restore, you may need to restore investigation, investigation_attachment, investigation_event, investigation_lead, investigative_canvas, and investigative_canvas_leads." But except for the investigation KV Store (| rest /services/storage/investigation/investigation), I can't access the other KV Store collections. Is this missing functionality? Thanks!
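In case it is useful, the generic KV Store data endpoint can normally be queried directly; the app namespace (SplunkEnterpriseSecuritySuite) and collection name below are assumptions, so substitute whichever app actually owns the collection in your ES version:

| rest splunk_server=local /servicesNS/nobody/SplunkEnterpriseSecuritySuite/storage/collections/data/investigation_event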
Hello! I'm trying to set an alert that lets me know if tasks in a specific queue pass a specific duration. The search has been giving me issues. I tried a transaction line, but I don't have an endswith. Does anyone know how to run a search like this? I'm trying something like:

earliest=-30d@d index=[DATA] sourcetype=incident_history incident_type=[SPECIFIC QUEUE] event_type=[SPECIFIC ACTION (LIKE A TASK ON HOLD)] | transaction incident_id when startswith=[SPECIFIC ACTION (LIKE A TASK ON HOLD)] endswith= > 72h | table incident_id, duration | sort - duration

It's not really a transaction, but it's the only thing I could think of. What would a search look like for finding when an incident_id has been in a specific queue past a specific duration? Any help would be appreciated.
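One way to avoid transaction entirely is to compute, per incident_id, how long it has been since the incident entered the queue; a minimal sketch, where the placeholders and the 72-hour threshold mirror the question:

earliest=-30d@d index=[DATA] sourcetype=incident_history incident_type=[SPECIFIC QUEUE] event_type=[SPECIFIC ACTION]
| stats earliest(_time) as entered_queue by incident_id
| eval duration_hours=round((now()-entered_queue)/3600,1)
| where duration_hours > 72
| table incident_id, duration_hours
| sort - duration_hours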
This article states how to change the TTL for a saved search individually: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2105/Search/Extendjoblifetimes I want to change the default TTL of any and all saved searches. Otherwise, my team and I have to remember to change this for each new search we save.
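I am not certain how Splunk Cloud exposes this (it may require a support ticket), but on-prem the usual sketch is to set the TTL once at the app level instead of per search, since dispatch.ttl (in seconds) is the setting behind the per-search method in that doc:

# local/savedsearches.conf in the relevant app (assumption: a 7-day TTL is wanted)
[default]
dispatch.ttl = 604800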
Hi, here is my log. What is the rex to extract "0000A0@#0000" and "mymodulename"?

2021-07-14 23:59:05,185 INFO [APP] User: 0000A0@#0000 || module: mymodulename

Any idea? Thanks
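A minimal sketch, assuming neither value ever contains spaces; the extracted field names are just examples:

| rex "User:\s+(?<user_id>\S+)\s+\|\|\s+module:\s+(?<module_name>\S+)"
| table user_id module_name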
Every time I search, I get errors:

Could not load lookup=LOOKUP-cisco_asa_change_analysis
Could not load lookup=LOOKUP-cisco_asa_ids_lookup
Could not load lookup=LOOKUP-cisco_asa_intrusion_severity_lookup
Could not load lookup=LOOKUP-cisco_asa_severity_lookup

How can this be fixed in Splunk Cloud?
Hey guys, we are trying to configure Splunk with S3 and are facing issues. A few questions:

1) What should be under "Configure the remote volume"? We have storageType: remote. What does [volume:s3] signify?

2) Do the entries below suffice?

storageType = remote
path = s3://splunk-smartstore/indexes
remote.s3.supports_versioning = false
remote.s3.endpoint = http://<IP-address>/splunk-smartstore
remote.s3.access_key = <Access_key>
remote.s3.secret_key = <secret key>

We keep seeing the following errors:

/opt/splunk/etc/master-apps/_cluster/local]# /opt/splunk/bin/./splunk cmd splunkd rfs -- ls
error: <remote_id> expected
error: operation failed; check log for details

Which log file can help with debugging this?
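A minimal indexes.conf sketch for the peers (pushed from the cluster master), assuming an S3-compatible endpoint; [volume:s3] is just a named remote volume that indexes point at through remotePath, and note that remote.s3.endpoint normally carries only the service host, with the bucket named in path:

[volume:s3]
storageType = remote
path = s3://splunk-smartstore/indexes
remote.s3.endpoint = http://<IP-address>
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>
remote.s3.supports_versioning = false

[default]
remotePath = volume:s3/$_index_name

For the rfs errors, splunkd.log under $SPLUNK_HOME/var/log/splunk/ is usually where the "check log for details" message lands.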
Hi folks, I am trying to enrich my search with a subsearch in the same time bucket/bin. The search can be found below.

Details:
Main search: looking for 5 or more failed login attempts from an account/user. If a login attempt fails, userid doesn't show up; however, if it is successful on a subsequent attempt, userid shows up in the logs.
Subsearch: looking up username by using userid. This username will enrich the main search's username field along with the userid.

Two complications:
1. userid is supposed to be unique, but not always, so both the main search and the subsearch should look at the same time frame to produce correct results.
2. Sometimes the subsearch cannot find username due to the lack of a successful login; in this case I want my main search to show the result without username, or fill username with NULL or so.

Note: I am not sure whether the following way is proper or not, but it looks like it works, apart from the second complication mentioned above.

Thanks,

index="useractivity" event=login response.login=failed
| eval temp=split(userid, ":")
| eval urole=mvindex(temp,5)
| bucket _time span=15m
| join type=inner userid
    [ search index="useractivity"
    | eval userid_tmp=split(userid, ":")
    | eval userid=mvindex(userid_tmp, 0), username=mvindex(userid_tmp, 1)
    | bucket _time span=15m
    | stats latest(userid) as userid by username ]
| stats values(src_ip) values(event) count(event) as total by _time user urole userid username
| where total >= 5
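As an alternative to the join, one sketch is to handle failed and successful logins in a single search and let eventstats propagate the username within each 15-minute bucket; the split()/mvindex() positions are copied from the query above, but how username shows up in successful events is an assumption, so adjust to your data:

index="useractivity" event=login
| eval parts=split(userid, ":")
| eval uid=mvindex(parts,0), uname=mvindex(parts,1), urole=mvindex(parts,5)
| bucket _time span=15m
| eventstats values(uname) as username by uid, _time
| fillnull value="NULL" username
| where 'response.login'="failed"
| stats values(src_ip) as src_ip, values(event) as event, count as total by _time, uid, urole, username
| where total >= 5

Because eventstats runs over all events in the bucket, rows with no successful login keep username as NULL instead of being dropped the way an inner join drops them.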
Hello! I have a search with timechart, and I need to filter time AFTER the timechart, based on the current time.

I've tried:

search blablabla | timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR | where earliest=-3m@m latest=@m

But got the error: Error in 'where' command: The operator at 'm@m latest=@m' is invalid.

And:

search blablabla | timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR | search earliest=-3m@m latest=@m

But got no results. Does anyone know how to do that? Thank you!
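Since _time is still a column after timechart, one sketch that should work is filtering it with relative_time() (the only assumption is that you want the last three complete minutes):

search blablabla
| timechart span=1m limit=0 eval(sum(SOM)/sum(VOL)) by VAR
| where _time >= relative_time(now(), "-3m@m") AND _time < relative_time(now(), "@m")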
We have a multi-site installation of Splunk and would like to test whether forwarder_site_failover is working properly. In the forwarder's outputs.conf we have the following:

[indexer_discovery:master1]
pass4SymmKey = $secretstuff$
master_uri = https://yadayada:8089

[tcpout:group1]
indexerDiscovery = master1
useACK = false
clientCert = /opt/splunk/etc/auth/certs/s2s.pem
sslRootCAPath = /opt/splunk/etc/auth/certs/ca.crt

[tcpout]
forceTimebasedAutoLB = true
autoLBFrequency = 30
defaultGroup = group1

As far as the yadayada cluster master goes, we have the following config:

/opt/splunk/etc/apps/clustermaster_base_conf/default/server.conf [clustering]
(...)
/opt/splunk/etc/apps/clustermaster_base_conf/default/server.conf forwarder_site_failover = site1:site2

One thing I was trying to figure out is whether I need to explicitly set site2:site1 as well, or if the existing configuration is enough for failing over from site1 to site2 and from site2 to site1. My approach was to cut the connection between the forwarder and the site1 indexers by setting iptables rules on the indexers that DROP the connections from the forwarder.

#e.g. iptables rule
iptables -I INPUT 1 -s <forwarder ip> -p tcp --dport 9997 -j DROP

#forwarder splunkd.log
07-15-2021 16:20:41.729 +0000 WARN TcpOutputProc - Cooked connection to ip=<site1 indexer ip>:9997 timed out

The iptables rules didn't make the forwarder fail over, so I wonder if the failover process only kicks in when the cluster master loses visibility of the indexers. In a live setup this seems riskier and less flexible. What is the recommended approach to perform this kind of testing?
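One possibility, assuming forwarder_site_failover accepts a comma-separated list of site pairs (worth checking against the server.conf spec for your version), is to declare both directions explicitly on the cluster master:

[clustering]
forwarder_site_failover = site1:site2,site2:site1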
For non-admin roles, when I navigate to the user web page "Account Settings", it shows "page not found". Is there a way to allow certain roles to access the page? My user role already has the default capabilities, including change_own_password, but I am still not able to access "Account Settings". Thanks in advance.
I want to fetch the availability report for all the network devices that we have in our data center. Requesting helping hands on this platform to help me formulate a query in Splunk. I am enclosing the results that I have fetched from NNMi (a network node monitoring performance tool); I want similar results from Splunk as well (Node Availability %). Thanks & Regards, Sahil Vaishnavi
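A rough sketch, assuming you have (or can onboard) periodic up/down polling events for each device; the index, sourcetype, and field names below are hypothetical, since the NNMi export is not shown:

index=network_monitoring sourcetype=device_status
| eval up=if(status="up",1,0)
| stats avg(up) as availability by device
| eval "Node Availability %"=round(availability*100,2)
| table device, "Node Availability %"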
I've got a JSON event that I'd like to tabulate by using `index=myindex | table *`. When I do this, though, it includes some system fields, such as `host`, `index`, `linecount`, `punct`, `source`, and `sourcetype`. Does anyone know if there's a way to exclude them without naming them all individually, via a built-in method/variable? e.g. `index=myindex | fields - $SYSTEM_FIELDS$ | table *` Thanks, Henri
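There is no built-in $SYSTEM_FIELDS$ token that I know of, but a small macro lets you name the fields once and reuse it everywhere; the macro name is made up:

# macros.conf
[drop_system_fields]
definition = fields - host index linecount punct source sourcetype splunk_server eventtype

Usage: index=myindex `drop_system_fields` | table *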
Hi, I have a file server; every day, backups of servers are copied to it under the path below:

/backup/files/
/backup/files/server1/$DATE.zip
/backup/files/server2/$DATE.zip
...

How can I handle this with Splunk: check that path every day and, whenever one server has not copied its backup files, have Splunk alert me. E.g. the backup file is ready every night at 04:00; every morning at 07:00 AM, check that path and, if there is a directory that does not have a file created today, alert me.

Any idea? Thanks,
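One sketch, assuming the backup tree is already being indexed (for example via a monitor on /backup/files or a scripted input that lists the directory daily) into a hypothetical index called backup_monitor:

| tstats latest(_time) as last_backup where index=backup_monitor by source
| rex field=source "^/backup/files/(?<server>[^/]+)/"
| stats max(last_backup) as last_backup by server
| where last_backup < relative_time(now(), "@d")
| table server, last_backup

Scheduled at 07:00, this lists every server whose newest indexed backup file is older than today, and the alert can trigger on "number of results > 0".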
If we have logs being pushed to a text file stored on our drive, can Splunk monitor the content of these files and can we search the content of these files?
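Yes, monitoring flat files is one of the standard inputs. A minimal inputs.conf sketch for the forwarder or instance that can see the drive (the path, index, and sourcetype are placeholders):

[monitor://C:\logs\myapp\*.txt]
index = main
sourcetype = myapp_logs
disabled = false

Once indexed, the file content is searchable like any other events, e.g. index=main sourcetype=myapp_logs "error".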