All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, I have a requirement where I'm writing a join query.

Query-1 returns id, time:

5560007 2020-09-27 12:30:18.915

Query-2 returns ID, time, status:

5560007 2020-09-27 18:49:13.757 Completed
5560007 2020-09-27 18:49:11.862 ActivityCompletedNotification
5560007 2020-09-27 18:49:08.781 Activity
5560007 2020-09-27 18:44:02.812 ActivityInProgressNotification

I'm using an outer join to combine Query-1 and Query-2, and I need the latest value from Query-2, i.e. "Completed". Currently, when I write an outer join, it picks values from Query-2 at random.

Query:

index="xxxxxx" "5560007" | table id, _time | join type=outer id [ search index="xxxxx" "5560007" | table id, _time, notificationType | sort -_time] | table id, notificationType

Please help. Many thanks!
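One possible approach, sketched below: after sorting the subsearch newest-first, deduplicate on id so only the most recent notificationType per id survives before the join. The index names are the placeholders from the post, not real values.

```
index="xxxxxx" "5560007"
| table id, _time
| join type=outer id
    [ search index="xxxxx" "5560007"
      | sort 0 -_time
      | dedup id
      | table id, notificationType ]
| table id, notificationType
```

`sort 0` lifts the default 10,000-result sort limit, and `dedup id` keeps the first (newest) row per id.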
Hello, I am new to Splunk and we want to monitor our VMware environment. We installed the VMware app and add-on from Splunk to get data in. This is working, but the reports and dashboards are not what we need: they show too much, and we only want some simple values. I tried to build my own searches, but it's hard for me to find the data I need. Is there any explanation of how to work with the add-on data, or is there a better/simpler way to get the data from vSphere into Splunk? To start simple, I want a search that shows me the overall CPU consumption across all hosts for the last 4 hours.
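A starting point might look like the sketch below. Note that the index, sourcetype, and field names (vmware-perf, vmware:perf:cpu, p_average_cpu_usage_percent) are assumptions based on typical Splunk Add-on for VMware output, not confirmed by the post; verify what your environment actually has, for example with `| metadata type=sourcetypes index=vmware-perf` or `| fieldsummary`.

```
index=vmware-perf sourcetype=vmware:perf:cpu earliest=-4h
| timechart span=5m avg(p_average_cpu_usage_percent) AS avg_cpu_pct by host
```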
I have a search (picture below) which calculates the open option interest on several ticker symbols. I was able to figure out how to calculate the sum of the "Call/Put Open Interest" field for each ticker, but I can't figure out how to calculate the ratio between them. For example, the Apple (AAPL) symbol should have a field with the value 1.53, found by manually dividing the call open interest (1825974) by the put open interest (1193360). How can I create a new field that calculates this for me?

My search is:

index=raw | eventstats max(_time) as maxtime | where _time=maxtime | stats sum(open_interest) as OI by ul_symbol, put_call | stats list(put_call) as "Option Type", list(OI) as "Call/Put Open Interest", sum(OI) as "Total Open Interest" by ul_symbol | sort -"Total Open Interest" | rename ul_symbol as "Symbol"

And an example of an event is:

{
  ask: 8.9
  bid: 6.5
  delta: -0.46
  dte: 42
  expiration_date: Nov 6
  gamma: 0.02
  high_price: 9.85
  last: 7.85
  low_price: 7.75
  net_change: -2.59
  open_interest: 6
  percent_change: -24.83
  put_call: PUT
  rho: -0.07
  strike: 112
  symbol: AAPL_110620P112
  theta: -0.091
  time_value: 7.85
  ul_symbol: AAPL
  vega: 0.153
  volume: 55
}

I'm definitely new to all this. Appreciate the help!
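One way to get the ratio is to pivot the two put_call values into separate columns with chart, then divide them with eval. This sketch assumes the put_call field takes exactly the values CALL and PUT (the sample event only shows PUT, so CALL is an assumption):

```
index=raw
| eventstats max(_time) as maxtime
| where _time=maxtime
| chart sum(open_interest) AS OI over ul_symbol by put_call
| eval "Call/Put Ratio"=round(CALL / PUT, 2)
| rename ul_symbol AS Symbol
```

After the chart command each symbol is one row with CALL and PUT columns, so the division is a plain per-row eval.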
I'd like to display up to a certain number of data points in a bar chart in a way that the items which do *not* get displayed are collapsed into one bar: "other". Is there a way to do this? I'm aware that I can simply set the maximum number of data points to display, but that provides no information about the rest, which is exactly what I'm trying to show. I'm using the SplunkJS framework to create the visualization in an app external to Splunk.
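If the data can be shaped in the search itself, the built-in useother and otherstr options on top (and on chart/timechart) may do this before the visualization ever sees the data. A sketch, where the index and field names are hypothetical placeholders:

```
index=my_index
| top limit=9 useother=true otherstr="other" category
```

This returns the nine most frequent category values plus one aggregated "other" row, which a bar chart then renders as a tenth bar.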
Hi, I have the following search, where I'm looking for the common Plugin_ID between two searches. However, with 'stats count by' I'm losing other fields (Name, _time) that are an important part of the overall goal. The end goal is to find the common Plugin_ID values between the two searches, when each Plugin_ID was first detected (date), and the number of days since it was first detected.

index=main sourcetype="csv_nessus" Risk=High earliest=-180d@d latest=-35d@d AND [search index=main sourcetype="csv_nessus" Risk=High earliest=-35d@d latest=now | stats count by Plugin_ID | table Plugin_ID Name _time ] | chart count by Plugin_ID | table Plugin_ID, Name, _time

Please help me. Thanks, Bhagatdd
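One possible rework, assuming "common" means a Plugin_ID that appears in both time windows: search the full 180 days once, tag each event with its window, and keep only Plugin_IDs seen in both. This avoids the subsearch and preserves Name and the first-seen time via stats:

```
index=main sourcetype="csv_nessus" Risk=High earliest=-180d@d latest=now
| eval window=if(_time < relative_time(now(), "-35d@d"), "older", "recent")
| stats earliest(_time) AS first_seen values(Name) AS Name dc(window) AS windows by Plugin_ID
| where windows=2
| eval first_detected=strftime(first_seen, "%Y-%m-%d"),
       days_since=round((now() - first_seen) / 86400, 0)
| table Plugin_ID Name first_detected days_since
```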
How do I stop a Splunk search head from exceeding the data limit allowed by the license? The search head runs the Splunk App for Windows Infrastructure and is indexing information from an AD server and a Windows 10 workstation.
I have a Prod Linux server where I have deployed a Universal Forwarder that monitors a few directories, say like below. This is a new server and I am integrating it for the first time on, say, Sep 8th.

$ ls -la
rwxr-xr-x 5 yusufshakeel yusufshakeel 160 Sep 4 02:53 alarm_0409.log
rwxr-xr-x 8 yusufshakeel yusufshakeel 256 Sep 5 02:53 alarm_0509.log
rwxr-xr-x 2 yusufshakeel yusufshakeel  64 Sep 6 02:53 alarm_0609.log
rwxr-xr-x 1 yusufshakeel yusufshakeel  10 Sep 7 02:53 alarm_0709.log
rwxr-xr-x 2 yusufshakeel yusufshakeel  64 Sep 8 02:53 alarm_0809.log

Now the integration is complete and Splunk starts monitoring. I see that only the Sep 8th file got indexed; the previous four logs (Sep 4 - Sep 7) were not ingested. My question is: will Splunk not index log files older than the date it was integrated? Please help.
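By default a monitor input reads existing files regardless of their age, so a common suspect is the ignoreOlderThan setting in inputs.conf, which skips files whose modification time is older than the threshold. A hypothetical stanza to check against (the path, index, and sourcetype names are placeholders, not from the post):

```
[monitor:///opt/myapp/logs/alarm_*.log]
index = main
sourcetype = alarm_log
# If this is set, files last modified longer ago than the threshold
# are never read -- remove or raise it to pick up the older files.
# ignoreOlderThan = 1d
```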
Hello, I have had an issue where specifically the firewall logs were cut off for about 5 hours, then reconnected and started logging again in Splunk. The syslog server responsible is actually running and sending data, but how can I troubleshoot why the logs were not sent during that specific time period? I am new to troubleshooting indexers etc.; any help is appreciated. Regards,
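A first step might be to check Splunk's own ingestion metrics for that sourcetype around the gap, to see whether the indexer received anything at all during those 5 hours. The sourcetype name below is a hypothetical placeholder:

```
index=_internal source=*metrics.log* group=per_sourcetype_thruput series=your_firewall_sourcetype
| timechart span=10m sum(kb) AS kb_indexed
```

If kb_indexed drops to zero across the window, the data never reached the indexing pipeline, which points the investigation toward the syslog server, the network path, or the receiving input rather than search-time issues.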
My Splunk logs look like the sample below. The total number of keys can change based on the use case. I need to get the exact number of keys from this data and then the max key count among those events. Please guide me here.

{
  level: INFO
  logger_name: com.123.logging
  process: NA
  requestId: 1234567
  attribute: email
  criteria: value
  path: aa.bb.cc
  service_name: SERVICE_NAME
  thread_name: h1234567
  timestamp: 2020-09-26T07:33:53.451Z
}
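One sketch: extract every key name from the raw event with a multivalue rex, count them with mvcount, then aggregate. The regex assumes the `key: value` layout shown in the sample, and the index/sourcetype names are placeholders:

```
index=my_index sourcetype=my_sourcetype
| rex field=_raw max_match=0 "(?<json_key>[\w.]+):\s"
| eval key_count=mvcount(json_key)
| stats max(key_count) AS max_key_count avg(key_count) AS avg_key_count
```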
How can I access all historical reports in Splunk? My requirement is to prepare a visualization from the last 30 days of report data.
Good afternoon, I have a search query that ends with the following:

|table Name, Date, ImageURL1, ImageURL2

The search can return up to 100 records. Currently, the ImageURL1 and ImageURL2 values are links to images hosted on an internal web server. How would I go about rendering these images in Splunk so that the images themselves display in the search results / dashboard instead of the URLs?
Hi. I'm configuring a docker-compose file responsible for starting a cluster of an application, plus Splunk and the universal forwarder. It is working, but I don't have any tag indicating which container a log came from. Is there any way to add a tag with the hostname?

Scenario: I have the docker-compose file below, and I'll scale myapp to 3 instances. Each instance receives a random hostname from docker-compose, but the log path is the same for all instances. How can I add the myapp hostname as a tag in Splunk? With the universal forwarder, the value of the hostname field for all logs is the hostname of the universalforwarder container, in my case splunkforwarder.

myapp:
  image: myapp/myapp:latest
  environment:
    - LOG_PATH=/opt/myapp/logs
  ports:
    - "8080"
  volumes:
    - log_volume_splunk:/opt/myapp/logs
splunk:
  image: splunk/splunk:8.0
  hostname: splunk
  container_name: splunk
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_USER=root
    - SPLUNK_ENABLE_LISTEN=9997
    - SPLUNK_PASSWORD=password
  ports:
    - "8000:8000"
splunkforwarder:
  image: splunk/universalforwarder:8.0
  hostname: splunkforwarder
  container_name: splunkforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license --answer-yes
    - SPLUNK_STANDALONE_URL=splunk:9997
    - SPLUNK_USER=root
    - SPLUNK_ADD=monitor /opt/myapp/logs
    - SPLUNK_PASSWORD=password
  restart: always
  depends_on:
    - splunk
  volumes:
    - log_volume_splunk:/opt/myapp/logs
Greetings, I'm hosting a Splunk deployment server in a Kubernetes environment. When I use one replica, connections are smooth. But when I use two or more, I get this error:

PubSubSvr - sender=connection_<my_client> channel=tenantService/handshake Message not dispatched (connection invalid)

The server receives every phone home, but it doesn't do anything else. What are some of the causes of this?
Hello! I have a scheduled report that I have running monthly that exports my results into a PDF format that is emailed out. I would like to have it exported as a CSV instead but did not see that option. Is there a way to do this?   Thanks!
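If the report is delivered by the scheduled-search email action, the attachment format can be switched in the email settings of the schedule, or directly in savedsearches.conf. A hypothetical fragment (the stanza name and recipient are placeholders):

```
[My Monthly Report]
action.email = 1
action.email.to = team@example.com
# Attach results as CSV instead of a rendered PDF
action.email.sendcsv = 1
action.email.sendpdf = 0
```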
Hello everyone. How do I integrate consoles into the cloud? I want to integrate the Cortex cloud console into Splunk Cloud. Is this possible?
When I run this query it works just fine as an ad hoc search, but when I schedule it, it throws the following error:

[subsearch]: [subsearch]: [SERVER1] Search process did not exit cleanly, exit_code=-1, description="exited with code -1". Please look in search.log for this peer in the Job Inspector for more info.

Here's the query. The issue is definitely not space on the drive; there's plenty of space. Also, if I hard-code the subsearch as search index=idx2 earliest=-30d@d latest=now, the scheduled search works fine, but then I would have to add some additional lines of SPL to ensure we use only the latest pull and avoid duplicate data, which also takes a little longer to run.

index=myindex sourcetype="mysource1"
    [| metadata index=myindex type=sourcetypes
     | search sourcetype="mysource1"
     | eval earliest=relative_time(lastTime, "-1h@h")
     | table earliest]
| table id1 field1 field2 field3
| join type=left field3
    [ search index=idx2
        [| metadata index=idx2 type=sourcetypes
         | search sourcetype="source2"
         | eval earliest=relative_time(lastTime, "-1h@h")
         | table earliest]
    | rename id as field3
    | table field3, f4, f5, f6, f7]
I'm trying to use timechart (which may be the wrong approach) to count, for each day, the events that were "active" over a period of time. For example, the data would be:

user   session   first_seen                  last_seen
user1  137271    2020-09-13T00:39:40.079Z    2020-09-24T00:56:30.941Z
user1  137264    2020-09-13T13:17:10.052Z    2020-09-25T13:19:37.342Z
user1  137272    2020-09-13T13:48:24.513Z    2020-09-25T13:27:27.663Z
user2  137272    2020-09-16T02:45:28.436Z    2020-09-24T13:21:27.215Z
user2  137267    2020-09-18T13:03:01.847Z    2020-09-25T13:18:05.927Z
user3  137272    2020-09-13T13:04:52.235Z    2020-09-25T13:07:02.422Z

Resulting in (for use in some sort of timechart-like graph, or maybe even a bar chart):

Date (x axis)   Count (y axis)
2020-09-13      4
2020-09-14      4
2020-09-15      4
2020-09-16      5
2020-09-17      5
...             ...
2020-09-24      6
2020-09-25      4

One thought that came to mind was creating and then expanding a multivalue field with a value for each day between the first and last dates, though I'm not sure how to accomplish that, if it's even doable. I've also thought a couple of times about basing something on a calculated duration, but that may be challenging given the varying first and last times. Or maybe there's a pre-canned app out there I'm not finding?
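The multivalue idea from the post can be sketched roughly as follows: parse both timestamps, snap them to day boundaries, generate one epoch value per active day with mvrange, and expand. The index name is a placeholder, and the strptime format assumes the ISO-8601 timestamps shown above:

```
index=my_index
| eval first_day=relative_time(strptime(first_seen, "%Y-%m-%dT%H:%M:%S.%3QZ"), "@d"),
       last_day=relative_time(strptime(last_seen, "%Y-%m-%dT%H:%M:%S.%3QZ"), "@d")
| eval active_day=mvrange(first_day, last_day + 86400, 86400)
| mvexpand active_day
| eval Date=strftime(active_day, "%Y-%m-%d")
| stats count AS Count by Date
```

After mvexpand there is one row per session per active day, so a plain count by Date yields the desired series.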
If I gracefully shut down the UF, it will send all logs from the output queue and from the internal parsing queue. Suppose I restart the UF after 1 minute: will it resume sending logs from the point in the log files where it left off before the shutdown, or will it send only the new logs being appended, independent of where it left off? If logs are dropped in such scenarios, is there any way to detect how many were dropped? And what happens if the UF crashes? Obviously it will drop the queued logs, but where will it resume reading once it is up and running again?
I'm new to Splunk and I find Splunk reports confusing. In other SIEMs, a report is the result of a previously run query. However, it seems that Splunk reports are saved search queries without the results of previous runs, so when I click a report name it appears to rerun the query rather than show the results of a previous run. Are my assumptions and understanding of reports correct?
Hello All, I am new to Splunk and to JavaScript as well. I need help auto-populating dropdown list values based on the values selected in other dropdowns. Consider three fields, Country, State, and City, which are interdependent. I am able to populate a dynamic dropdown list (for the two fields State and City) based on the selection in another dropdown (Country). Now I want to auto-populate the values of State and Country upon the selection of any City: the values should be displayed directly in the dropdown boxes, with no need to select them manually, since for a given City the State and Country are unique. Thanks in advance!