All Topics

Hi all, I am using Splunk Cloud and would like to configure a universal forwarder in a VM on a non-domain-joined laptop. The goal is to run attacks and malware samples. As such, I will be using a VPN to mask my IP address, which will not be associated with my cloud instance. My company has whitelisted IPs for access to the console. Will I be able to configure this, or will the cloud firewall not allow logs to be ingested from a non-company IP address? Thanks!
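For context on the networking side: the web console allow list and the forwarder ingestion endpoint are usually controlled separately. A universal forwarder sends to the stack's inputs endpoint on port 9997, normally configured by the Splunk Cloud universal forwarder credentials app. A minimal outputs.conf sketch, with example-stack as a hypothetical stack name:

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example-stack.splunkcloud.com:9997

Whether that endpoint accepts traffic from a non-company IP depends on how the admin scoped the allow lists, so it is worth confirming with them.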
Hey all, just started learning Splunk this week; interesting so far. How can I sort the top header from lowest to highest? I've attached an example of what I'm working with below. Just want to organise it.
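A minimal sketch, assuming the table comes from a transforming command and the column to order by is named count (index, sourcetype and field names here are hypothetical):

index=web sourcetype=access_combined
| stats count by status
| sort 0 +count

sort + orders ascending and sort - descending; the 0 lifts the default 10,000-row cap.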
I am running Splunk 8.1.0.1 on Windows Server 2016. The KVStore keeps failing when I start the Splunk service. This causes splunkd to fail after some time, which means I have to restart it to access the Splunk GUI. Are there any logs I should gather to identify what the issue is? I have read through some forums and tried the "stop Splunk, move server.pem file, start Splunk" approach to generate a new server certificate, but I am still getting the KVStore failure. Any help would be greatly appreciated, as I am at a loss at this point.
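A sketch for gathering the relevant logs, assuming a default installation:

index=_internal sourcetype=splunkd component=KVStore* (log_level=ERROR OR log_level=WARN)
index=_internal source=*mongod.log*

On disk these correspond to %SPLUNK_HOME%\var\log\splunk\splunkd.log and mongod.log, which are the usual first stops for KVStore startup failures.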
People that have been on cloud for a while: what was your onboarding like, and how difficult is it to add new products? I got my trial account to see what the product can do, and basically so far the answer is "nothing". I get that it's a trial, but I literally can do nothing with it besides log in and look around. I installed the add-ons I was interested in, which was the reason for the trial, but that is pointless because you have to restart some services that I don't have access to. Since it's a trial, I'm not entitled to the support portal to ask about this, so I called and got stuck in a loop between "it's a trial, press this for support" and what is really sales, who sends me back to support. I tried setting up the universal forwarder because I can't just point logs at the cloud instance for some reason, and those instructions are so convoluted and link to each other in such a nested way that I feel like I'm dealing with Nvidia GRID. So I went online looking for some instructions from third parties, and I have to say no one appears to use Splunk Cloud, and honestly I understand why: people running it themselves have way more control and don't need to use the universal forwarders. I thought the cloud would be simple since there is nothing to install, but not if I don't have a way to restart my instance when that's a requirement. I emailed support pleading my case and managed to get the forwarder installed, but I have no way to check this. Honestly the product looks really promising, and it has a learning curve, which I expect and would find my way around, but I don't think cloud is going to be a good fit.
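For what it's worth, one way to check whether a universal forwarder is actually sending to the stack, sketched on the assumption that you can search internal logs there (the host name is a placeholder for your own):

index=_internal host=<your_forwarder_hostname>
| stats count by source

Forwarders ship their own internal logs by default, so any results mean the connection is up.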
Hello Experts, kindly help me filter the latest one year of data for a particular field. For example:

index="abc" sourcetype="xyz" | table ID, COMPLETION_DATE, LEARNING_ITEM_ID, LEARNING_ITEM_TITLE, TARGET_DATE

Here I need to filter for who has completed within the last one year according to the completion date. The completion date actually spans the last five years, but I just need the past year, without hard-coding any date in the query. I am wondering if we can use the latest command. Kindly help.
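A sketch using strptime and relative_time, assuming COMPLETION_DATE is a string like 2021-06-15 (adjust the format string to match the actual data):

index="abc" sourcetype="xyz"
| eval completion_epoch = strptime(COMPLETION_DATE, "%Y-%m-%d")
| where completion_epoch >= relative_time(now(), "-1y")
| table ID, COMPLETION_DATE, LEARNING_ITEM_ID, LEARNING_ITEM_TITLE, TARGET_DATE

Note that latest filters on the event timestamp (_time) rather than on an arbitrary date field, so it won't help here.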
I want to look for requests in a service mesh ingest log which have no corresponding application log entries. My first search is

index=kubernetes source=*envoy-proxy* (api.foo.com OR info) AND downstream_remote_disconnect | rex field=_raw "\[[^\]]+\] \"(?<downstream>[^\"]+)\".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<status>\d+).*\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})" | eval id=id1.id2.id3 | fields id

My second search is

index=kubernetes source=*proxy* operation: | rex field=_raw "span_id:(?<id>[0-9a-f]{16});" | fields id

and the obvious way of combining them yields no results:

index=kubernetes source=*envoy-proxy* (api.foo.com OR info) AND downstream_remote_disconnect | rex field=_raw "\[[^\]]+\] \"(?<downstream>[^\"]+)\".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<status>\d+).*\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})" | eval id=id1.id2.id3 | fields id | search NOT [ search index=kubernetes source=*proxy* operation: | rex field=_raw "span_id:(?<id>[0-9a-f]{16});" | fields id ]
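One subsearch-free sketch, assuming the two id formats line up (8+4+4 hex on the envoy side against the 16 hex of span_id): run both searches together, tag which events carried a span_id, and keep ids never seen on the application side. Subsearch results are also capped (typically at 10,000), which can silently empty the NOT list and would explain the empty combined search.

(index=kubernetes source=*envoy-proxy* (api.foo.com OR info) downstream_remote_disconnect) OR (index=kubernetes source=*proxy* operation:)
| rex field=_raw "span_id:(?<span_id>[0-9a-f]{16});"
| rex field=_raw "\"(?<id1>[0-9a-f]{8})-(?<id2>[0-9a-f]{4})-(?<id3>[0-9a-f]{4})"
| eval id=coalesce(span_id, id1.id2.id3)
| eval is_app=if(isnotnull(span_id), 1, 0)
| stats max(is_app) as has_app_log by id
| where has_app_log=0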
Hi Team, I need your help in creating a regex to extract a field.

"User_Claim":("sub":"qweihaytej"; "login_id":"Abc@domain.com";........)

Here User_Claim is a field, and I have to create a field for login_id. I have tried this, and it's not working:

..... | rex field=User_Claim " login_id"(? <loginID>\w+.) "

I am unable to see the field name in the interesting fields. Please advise. Thanks, Sagar
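A sketch that captures everything between the quotes after login_id, based on the sample shown:

... | rex field=User_Claim "\"login_id\":\"(?<loginID>[^\"]+)\""

The original attempt fails because of the space in (? <loginID> (the capture-group name must immediately follow the ?) and because the quotes around login_id are not escaped.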
Hello all, I need the installer or zip file for Splunk Universal Forwarder 8.2.3.  I can only find 8.2.2.1 or older or 8.2.4.   Thanks!
Hey all, I've got an interview and I need to show some level of competency at using Splunk; I'm doing a short presentation on it, and I have used it a little. I know it organises a lot of data from logs into useful information, and it's handy for forensics, security and auditing users, and I'm sure much more as well. My task is to run Splunk on my computer and monitor operating system events and/or performance. I did monitor data from the source called "Local Event Logs", picked Security, Application, System and Setup, and have had a quick look over them, but something is bugging me. How can I make this more interesting, given that I'm doing a presentation on it? Is there a field or something that would be good to talk about? There are so many options, so it's a bit tough to pick or find a good one. Odd question, I know, but any suggestions would be appreciated. Thank you for the read, guys.
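One idea, sketched on the assumption that the Security log landed in the usual Windows sourcetype: failed logons tend to demo well because they show both the security angle and field extraction in one search.

index=main sourcetype=WinEventLog:Security EventCode=4625
| stats count by Account_Name, Workstation_Name
| sort - count

EventCode 4625 is a failed logon; a spike against a single account makes an easy story to tell in a presentation.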
Hi there! I have a server that will be down for some time, and I would like not to be inundated with "missing forwarder" alerts. Is there a way to "pause" that alert for just that server? Thanks in advance!
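If the alert is an ordinary saved search you own, a minimal sketch is an inline exclusion in its search string (the host name is hypothetical):

<existing missing-forwarder search> NOT host="server-down-for-maintenance"

A lookup of suppressed hosts works the same way and is easier to maintain if this happens often.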
The data below comes from an index containing JSON, after running index = xyz | table Tab*, _time:

Tab1   Tab2   Tab3   Tab4   Tab5   _time
200    200    200    200    200    timestamp value
200    200    200    200    200    timestamp value

After adding | sort - _time | head 1 | reltime to the query, I get:

Tab1   Tab2   Tab3   Tab4   Tab5   _time       reltime
200    200    200    200    200    timestamp   some hours ago

Then I transpose:

| transpose column_name=Application_list | rename "row 1" as Status | eval Status = if(Status=200, "up", "down")

Is there any way to retain the reltime column after transposing? At the moment I get the output below:

Application_list   Status
Tab1               up
Tab2               up
Tab3               up
Tab4               up
Tab5               up
reltime            down
_time              down

and I want it like:

Application_list   Status   reltime
Tab1               up       x hours ago
Tab2               up       x hours ago
Tab3               up       x hours ago
Tab4               up       x hours ago
Tab5               up       x hours ago

Below is the whole query:

index = xyz | table Tab*, _time | sort - _time | head 1 | reltime | transpose column_name=Application_list | rename "row 1" as Status | eval Status = if(Status=200, "up", "down")
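A sketch that keeps reltime: after the transpose it is just another row, so copy its value onto every row with eventstats, then drop the helper rows. Field names are taken from the question; note that transpose turns the cell values into strings, hence the quoted "200".

index=xyz
| table Tab*, _time
| sort - _time
| head 1
| reltime
| transpose column_name=Application_list
| rename "row 1" as Status
| eventstats values(eval(if(Application_list="reltime", Status, null()))) as reltime
| search Application_list!="reltime" Application_list!="_time"
| eval Status=if(Status="200", "up", "down")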
I have an index named applications which contains a list of applications:

app1
app2
.
.
.
app10

I have created a simple dashboard showing the count of events for app1. Is there a way to add panels using a loop, so that I don't have to copy the panel XML code 10 times, once for each app?

<?xml version="1.0" encoding="UTF-8"?>
<dashboard>
  <label>Demo Apps</label>
  <row>
    <panel>
      <single>
        <title>app1</title>
        <search>
          <query>index=appindex (app1) | stats count</query>
          <earliest>-24h@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>
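Simple XML has no loop construct, so the usual options are generating the XML outside Splunk or collapsing the ten panels into one search split with trellis. A trellis sketch, where the case() mapping of raw terms to an app field is a hypothetical stand-in for however the app name can actually be derived from the events:

<panel>
  <single>
    <title>Apps</title>
    <search>
      <query>index=appindex | eval app=case(searchmatch("app1"), "app1", searchmatch("app2"), "app2", true(), "other") | stats count by app</query>
      <earliest>-24h@m</earliest>
      <latest>now</latest>
    </search>
    <option name="trellis.enabled">1</option>
    <option name="trellis.splitBy">app</option>
  </single>
</panel>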
I have a correlation search created. However, I want to exclude certain files from being alerted upon. I have a lookup file with a list of files to be excluded; however, when I call that lookup file to exclude the files, the search results exclude the whole host and all its affected files, not just the single file I want excluded.

My tstats search:

| tstats values(Symantec_ICDX.device_public_ip) values(Symantec_ICDX.user_name) values(Symantec_ICDX.file.name) values(Symantec_ICDX.threat.name) values(Symantec_ICDX.type) values(Symantec_ICDX.file.sha2) as Symantec_ICDX.file.sha2 values(Symantec_ICDX.file.path) from datamodel=Symantec_ICDX by Symantec_ICDX.device_name
| rename Symantec_ICDX.device_name as dest, values(Symantec_ICDX.device_public_ip) as dest_ip, values(Symantec_ICDX.user_name) as user, values(Symantec_ICDX.file.name) as file_name, values(Symantec_ICDX.threat.name) as threat_name, values(Symantec_ICDX.type) as type, Symantec_ICDX.file.sha2 as file_hash, values(Symantec_ICDX.file.path) as file_path

Results of the tstats search:

New tstats search, after putting ruby.exe into the lookup file:

| tstats values(Symantec_ICDX.device_public_ip) values(Symantec_ICDX.user_name) values(Symantec_ICDX.file.name) values(Symantec_ICDX.threat.name) values(Symantec_ICDX.type) values(Symantec_ICDX.file.sha2) as Symantec_ICDX.file.sha2 values(Symantec_ICDX.file.path) from datamodel=Symantec_ICDX by Symantec_ICDX.device_name
| rename Symantec_ICDX.device_name as dest, values(Symantec_ICDX.device_public_ip) as dest_ip, values(Symantec_ICDX.user_name) as user, values(Symantec_ICDX.file.name) as file_name, values(Symantec_ICDX.threat.name) as threat_name, values(Symantec_ICDX.type) as type, Symantec_ICDX.file.sha2 as file_hash, values(Symantec_ICDX.file.path) as file_path
| search NOT [| inputlookup exclusions.csv | fields file_name]
| search dest=COFGOOPAL2572TW

Results:

Lookup file:
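The likely cause: the tstats groups by device_name and collapses file names with values(), so file_name is multivalue, and the NOT condition discards the whole aggregated row for the host. A sketch that splits the rows before excluding and then re-aggregates, reusing the query and lookup names from the question:

| tstats ... from datamodel=Symantec_ICDX by Symantec_ICDX.device_name
| rename ...
| mvexpand file_name
| search NOT [| inputlookup exclusions.csv | fields file_name]
| stats values(*) as * by dest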
I have a strange issue: when I run a tstats query against a data model for the last 7 days in smart mode, 24 million events are searched. When I run the same search for the last 30 days, only 950k events are searched. This means fewer results are returned when last 30 days is selected. Does anyone know why fewer events are searched when I expand the time range to last 30 days?
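A sketch for narrowing it down: tstats over a data model reads accelerated summaries where they exist and, in the default mode, falls back to raw events elsewhere, and the model's summary range may not cover the full 30 days, which changes the events-searched count. Comparing the two modes can confirm (the model name is a placeholder):

| tstats summariesonly=true count from datamodel=YourModel
| tstats summariesonly=false count from datamodel=YourModel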
Hello, Anyone else having trouble posting on Answers? Thanks and God bless, Genesius
I'm using the .NET Agent for Windows. It seems the logs are hardcoded to go to "%ProgramData%\AppDynamics\DotNetAgent\Logs" even after redirecting them using AppDynamicsAgentLog.config. How can I send them where I actually want them to be?
Hello, we have a clustered environment which collects 2000 GB+/day, with the indexes.conf settings below; the rest of the settings are default. When does frozenTimePeriodInSecs start its count? Is it when the data is in the hot, warm or cold buckets? When will the buckets roll from hot to warm, and from warm to frozen, in my case? Is it after 90 days, since the maxHotSpanSecs default is 90 days? What is the approximate retention time for data with this config? And maxWarmDBCount = 4294967295 seems really high in this case. See the config below:

[index_name]
homePath = volume:hot_warm/index_name/main/db
coldPath = volume:cold/index_name/main/colddb
thawedPath = /opt/splunk/indexes/index_name/main/thaweddb
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 31104000
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 4294967295
repFactor = auto

Thanks in advance!
Hello, we have multiple Cisco switches that are configured to send logs to Splunk. When comparing the logs on the switch and the logs in Splunk, they do not match up. Splunk does not seem to catch all of the logs and appears to miss entries in large chunks, and it does not seem to be any single type of entry. I've searched by the IP of the switch and by the information in the log, thinking it might have been mislabeled, but it is not in Splunk at all. We have our switches set up to log at the informational level. This is happening across most switches in our environments; not all logs are entering Splunk. Is this a known issue? Thanks!
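A sketch for quantifying the gaps on the Splunk side, with the index and host as placeholders:

index=network host=10.1.2.3
| timechart span=5m count

Comparing that curve against the switch's own logging rate shows whether entries vanish in transit; if the switches send syslog over UDP, drops between the switch and the receiver are a common culprit.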
Hello, I have a list of events and a field called ClientDateTime. I want to show the events whose ClientDateTime is within 5s of the previous event. How can I do it?
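A sketch using streamstats to compare each event with the previous one, assuming ClientDateTime parses with the format string shown (the index name is a placeholder; adjust the format to the actual data):

index=your_index
| eval ct=strptime(ClientDateTime, "%Y-%m-%d %H:%M:%S")
| sort 0 ct
| streamstats current=f window=1 last(ct) as prev_ct
| eval gap=ct-prev_ct
| where gap<=5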
Hi all, is it possible to configure a universal forwarder on one machine that collects logs from all the other domain machines, rather than installing a UF on each machine? Thanks.
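For Windows event logs specifically, one sketch is remote WMI collection from a single Windows Splunk instance (a full Splunk Enterprise or heavy forwarder rather than a UF, running under an account with WMI rights on the targets; host names are hypothetical), via wmi.conf:

[WMI:RemoteEventLogs]
server = host1, host2
event_log_file = Application, Security, System
interval = 10
disabled = 0

There is no general remote-collection equivalent for files, so the usual pattern otherwise is a UF per machine or centralized syslog/Windows Event Forwarding.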