All Topics

Hey team, I have events that contain a field "job_code": index=default source=jobfeed. I have a lookup (jobs.csv) with the list of allowed job codes:

jobCode  jobDesc
000      EX
001      PT

My requirement is to generate an alert every day if any jobCode present in the lookup didn't show up at all in the events for the past 2 days. For instance, if Splunk didn't receive any event with job_code 000 in the past 2 days, I need an alert. I need this check for every jobCode in the lookup table. Can you please help me with a query for this? Thank you
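A common pattern for this kind of "expected value is missing" alert is to count events per job code over the window, append every code from the lookup with a zero count, and keep the codes whose total stays at zero. A minimal sketch, assuming the index, source, and lookup names from the post:

```spl
index=default source=jobfeed earliest=-2d@d
| stats count by job_code
| rename job_code as jobCode
| append [| inputlookup jobs.csv | fields jobCode | eval count=0]
| stats sum(count) as total by jobCode
| where total=0
```

Scheduled daily with an alert condition of "number of results > 0", each returned row is a jobCode from the lookup that produced no events in the past 2 days.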
Hi, can anyone make any suggestions as to how I can make this search more efficient?

index=prod_service_now sourcetype=snow:incident number=INC*
| fields opened_at dv_assignment_group sys_id
| dedup sys_id
| search dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
| eval now = now()
| eval now = relative_time(now,"@w1")
| eval now = relative_time(now,"-52w")
| eval earliest = relative_time(now,"-52w")
| eval _time = strptime(opened_at,"%Y-%m-%d%H:%M:%S")
| where _time >= earliest AND _time < now
| eval new_time = relative_time(strptime(opened_at,"%Y-%m-%d%H:%M:%S"), "+52w")
| eval _time = relative_time(new_time,"@w1")
| eval ReportKey = "LASTYEAR"
| append [ search index=prod_service_now sourcetype=snow:incident number=INC*
    | fields opened_at dv_assignment_group sys_id
    | dedup sys_id
    | search dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
    | eval now = now()
    | eval now = relative_time(now,"@w1")
    | eval earliest = relative_time(now,"-52w")
    | eval _time = strptime(opened_at,"%Y-%m-%d%H:%M:%S")
    | where _time >= earliest AND _time < now
    | eval _time = relative_time(strptime(opened_at,"%Y-%m-%d%H:%M:%S"), "@w1")
    | eval ReportKey = "CURRENTYEAR"]
| chart count by _time, ReportKey

Thanks
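One way to speed this up is to drop the append (which runs the expensive base search twice) and instead pull both years in a single pass, deriving ReportKey with an eval. A rough sketch under the same field names; note this assumes opened_at has a space between date and time, and it uses earliest/latest to limit how much data is scanned in the first place:

```spl
index=prod_service_now sourcetype=snow:incident number=INC* earliest=-104w@w1 latest=@w1
| fields opened_at dv_assignment_group sys_id
| search dv_assignment_group=ITSOCS* NOT dv_assignment_group="ITSOCS Logistics"
| dedup sys_id
| eval opened = strptime(opened_at, "%Y-%m-%d %H:%M:%S")
| eval ReportKey = if(opened >= relative_time(now(), "-52w@w1"), "CURRENTYEAR", "LASTYEAR")
| eval _time = if(ReportKey="LASTYEAR", relative_time(opened, "+52w"), opened)
| eval _time = relative_time(_time, "@w1")
| chart count by _time, ReportKey
```

Shifting last year's events forward by 52 weeks, as in the original, lines both series up on the same week buckets for the chart.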
Hello Splunkers! Does anyone have some SPL for a report that will show me ALL attributes assigned to each role? I'm trying to assess which of the roles in our Splunk environment have the "schedule_search" capability assigned to them. Thank you.
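Role definitions, including capabilities such as schedule_search, are exposed over the REST API, so one possible approach (assuming you have permission to query the REST endpoint) is:

```spl
| rest /services/authorization/roles splunk_server=local
| table title capabilities imported_capabilities srchIndexesAllowed
| search capabilities="schedule_search" OR imported_capabilities="schedule_search"
```

Dropping the final `| search ...` line gives the full attribute listing per role; keeping it narrows the result to roles that have schedule_search directly or inherited from another role.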
Hi, what's the best way to filter a search against a set of unique IDs from a subsearch? Currently I'm approaching it like this: <events to filter against subsearch ids> | join left subsearch_id [search subsearch]. However, it's returning a 1:1 set rather than all primary-search events that contain a matching id.
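For pure filtering, a subsearch placed directly in the base search usually works better than join: the subsearch expands into an implicit OR of field=value pairs, so every primary event with a matching id is kept (not a 1:1 join). A sketch, assuming both sides share the field name subsearch_id:

```spl
index=main <your primary search terms>
    [ search <your subsearch> | dedup subsearch_id | fields subsearch_id ]
```

If the id field is named differently in the subsearch, add `| rename other_id as subsearch_id` before `| fields`. Be aware of the default subsearch limits (10,000 results, 60-second runtime); for very large id sets a lookup-based approach may be needed instead.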
Hi all, I have search-time-extracted JSON in my events. When the JSON contains a "host" field whose value differs from the indexed "host" field, the search view shows two values for host in smart or verbose mode. You can't work with the search-time-extracted JSON host field: clicking on it gives no results, because host is an indexed field. When you run ... | stats count by host, only the index-time host is reported back, as expected. This behaviour differs from "normal" key=value search-time extraction. I found multiple posts about this, for example: https://community.splunk.com/t5/Getting-Data-In/Duplicate-host-field-after-indexing-JSON-event/m-p/292472 Unfortunately I'm not able to change the JSON field name at the source, and rewriting it is not a good option for me either. This looks more like a display bug to me, but it drives the power users crazy. Best regards, Andreas
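As a search-time workaround (since renaming at the source is not an option), one possibility is to re-extract the JSON field under a different, non-reserved name with spath and work with that copy instead:

```spl
index=<your_index> sourcetype=<your_sourcetype>
| spath path=host output=json_host
| stats count by json_host
```

This sidesteps the collision with the indexed host field entirely; json_host is an arbitrary name chosen for illustration.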
AIX: splunk start reports success, but splunkd doesn't actually stay running. This is a new installation. It looks like it starts with --nodaemon, and there is a stale PID file at the beginning of start:

[cb-nimapp1:/opt/splunkforwarder/bin]./splunk start
splunkd 16187888 was not running.
Removing stale pid file... done.
Splunk> Winning the War on Error
Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-8.1.1-08187535c166-AIX-powerpc-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
0513-059 The splunkd Subsystem has been started. Subsystem PID is 19071306.
Done
[cb-nimapp1:/opt/splunkforwarder/bin]ps -ef |grep splunk
root 15991280 19071306 0 11:05:20 - 0:00 /bin/sh /opt/splunkforwarder/bin/pid_check.sh conf-mutator 16187888
root 19071306 6554088 3 11:05:18 - 0:00 splunkd --nodaemon -p 8089 _internal_exec_splunkd
[cb-nimapp1:/opt/splunkforwarder/bin]ps -ef |grep splunk
root 15008220 19071308 0 11:05:25 - 0:00 [splunkd pid=19071308] splunkd --nodaemon -p 8089 _internal_exec_splunkd [process-runner]
root 19071308 6554088 15 11:05:23 - 0:00 splunkd --nodaemon -p 8089 _internal_exec_splunkd
[cb-nimapp1:/opt/splunkforwarder/bin]ps -ef |grep splunk
[cb-nimapp1:/opt/splunkforwarder/bin]
Hello, I'm working on a Splunk alert that monitors processes. If a process has been running for a long time, I want to retrieve the stage that process is on and include it in the alert, e.g. "Process A has been running for 2 hours and is currently on stage 'Load Library', and has been for 45 minutes". The process name value in the subsearch is the same as the source value in the main search (with "console" appended to each). The subsearch does return a table of the sources I want, but the main search then produces a table of lots of sources that I don't want:

sourcetype="text:jenkins"
| where source in (source, [search index=jenkins_statistics event_tag="job_event" node="*" job_name="*" build_number="*" earliest=-2h@h latest=now()
    | dedup host build_url sortby -_time
    | search (type="started") `utc_to_local_time(job_started_at)`
    | convert timeformat="%Y-%m-%d %H:%M:%S" mktime(job_started_at) as epocTime
    | eval job_duration = if(isnull(job_duration), now() - epocTime, job_duration)
    | eval Duration = tostring(job_duration,"duration")
    | eval job_result = if(type="started", "INPROGRESS", job_result)
    | eval ExceededLimit = if(Duration > "01:00:00", "Limit Met", "Limit Not Met")
    | eval source = if(type="started", "\""+build_url+"console\"", null())
    | table source])
| stats values(_raw) as Raw by _time, source
| eval Stages = if(like(Raw, "[Pipeline] { (%)"), trim(substr(Raw,15), ")"), null())
| sort - _time
| stats values(source) as Source, values(Stages) as Stages, values(Raw) as Raw, values(_time) as Time by source
| table source, Stages

For instance, the subsearch table is currently 3 processes/sources long, but the main search table is 81 processes/sources and doesn't include any of the sources the subsearch returned.
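One likely culprit in the pattern above: in `where source in (source, [...])`, the first argument is the field source itself, so the condition matches every event regardless of what the subsearch returns. An alternative worth trying is to use the subsearch directly as a filter in the base search, where its results expand into an implicit OR of source="..." terms (quoting is then added automatically, so the manual escaped quotes around build_url are no longer needed). A sketch under the same field names:

```spl
sourcetype="text:jenkins"
    [ search index=jenkins_statistics event_tag="job_event" type="started" earliest=-2h@h latest=now()
      | dedup build_url
      | eval source=build_url."console"
      | fields source ]
| stats values(_raw) as Raw by _time, source
```

This is a sketch, not a drop-in replacement: the duration and stage logic from the original pipeline would still need to be layered back on after the filtered base search.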
Seeing an issue in our environment - we have a search head cluster and the captain throws the following error message: Problem replicating config(bundle) to search peer ....... Upload bundle ......  url ...... failed; error="Broken pipe" Also, when I run a search from the captain, it says "Unable to distribute to peer because replication was unsuccessful" If we switch captains the error then appears on the new captain. Searches on the other search heads in the cluster work fine.
Hello all, Serious newbie to Splunk here. I have been tasked with trying to identify traffic and create rules to either allow or block traffic coming and going from our company IT lab.  I installed a Cisco ASA 5525 in transparent mode and now they want me to start locking it down. Thus I need to ID the traffic. I could really use the help on how to go about this with Splunk. I thought maybe trying to use netflow to do this but do not seem to be having much luck with getting it running on our Splunk install. I do not even know if I am doing that correctly at this point.. LOL..  One thing of note, as this is a lab, I have been told there is a budget of $0 for this. Thank you in advance for all of your help!
Hi, I have a dashboard with input panels in it. When I wanted to schedule a PDF delivery for this dashboard, I saw that the option is grayed out. Is there any way to override this and enable the delivery? Thanks.
Hi, in our current Splunk infrastructure, indexes are enabled with SmartStore and the indexers are clustered. Our local storage is now almost 80% full. On further validation, I noticed that a particular SmartStore-enabled index keeps all of its warm buckets on the indexers (local storage). But as I understand it, only a portion of the warm buckets should have a local copy and the others should be evicted, right? Could someone please help with troubleshooting steps to verify whether SmartStore is properly configured? TIA
Hello everyone! A specific question: I'm on Splunk version 8.0.6. I want to make a PDF of my dashboard, but when I export to PDF or print the web page, it doesn't render the same way the dashboard does. How can I automatically generate a good-looking PDF?
Hi all, my data is support-ticket logging. I retrieved all the state changes of each ticket with the transaction command, which outputs a list of statuses for each ticket id. What I'm trying to do is compare each pair of ticket statuses in order to create a state for management. I have this matrix defining the state for every pair of values:

        A       B
A   Value1  Value2
B   Value3  Value4

To accomplish this I'm using a case statement:

state=case(
    match(mvindex(status, 0), "^A$") AND match(mvindex(status, 1), "^A$"), "Value1",
    match(mvindex(status, 0), "^A$") AND match(mvindex(status, 1), "^B$"), "Value2",
    match(mvindex(status, 0), "^B$") AND match(mvindex(status, 1), "^A$"), "Value3",
    match(mvindex(status, 0), "^B$") AND match(mvindex(status, 1), "^B$"), "Value4")

Of course my real matrix is much bigger than this, which is why I'm looking for another way to do it. My expected result is a table like this:

Event   Status    State
1       A, B, A   Value2, Value3
2       A, A, B   Value1, Value2
3       B, B, A   Value4, Value3
4       B, A, B   Value3, Value2

Regards, Clement
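Rather than growing the case() for every cell, the matrix itself can live in a lookup (hypothetical lookup definition state_matrix with columns from, to, state), and every adjacent pair of statuses can be built with mvmap (Splunk 8.0+), expanded, and looked up. A sketch, assuming the multivalue field status and an Event grouping field as in the expected table:

```spl
| eval idx = mvrange(0, mvcount(status)-1)
| eval pair = mvmap(idx, mvindex(status, tonumber(idx))."|".mvindex(status, tonumber(idx)+1))
| mvexpand pair
| eval from = mvindex(split(pair,"|"), 0), to = mvindex(split(pair,"|"), 1)
| lookup state_matrix from to OUTPUT state
| stats list(status) as Status, list(state) as State by Event
```

Adding a row to the lookup then covers a new status pair with no SPL change, which scales far better than a case() with one branch per matrix cell.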
Is it possible for the cluster master to return the FQDNs of the indexers instead of their IPs? If yes, please explain how.
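Peers normally register with the cluster master using the address they bind to, but each indexer can advertise explicit addresses in server.conf. A sketch with hypothetical hostnames (to be set on each peer, followed by a restart):

```ini
# server.conf on each indexer (cluster peer); hostnames are placeholders
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
register_search_address = indexer01.example.com
register_replication_address = indexer01.example.com
register_forwarder_address = indexer01.example.com
```

With these set, the master hands search heads and forwarders the configured FQDNs rather than the detected IPs; DNS must of course resolve those names from every node that uses them.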
Hi, I don't know if the Jenkins Trigger app is still being maintained. If so, it would be nice if the 'Build Authorization Token Root' Jenkins plugin could be supported. Unfortunately, the build URL of that plugin is not compatible with what the Jenkins Trigger app currently supports. The advantage of the 'Build Authorization Token Root' plugin is that it allows builds to be triggered with authentication tokens without needing Job/Read permissions.
I was testing some things in my non-production environment. Then the cluster master broke and hasn't worked since; around the same time, 2 indexers also appear to have stopped working properly. I tried various recovery steps, but in the end neither indexer can serve the web UI. To bring the web port and service up, I ran "# /opt/splunk/bin/splunk set web-port 8000", but the indexer replies "Couldn't complete HTTP request: Connection reset by peer". I tried rebooting the platform, restarting the Splunk service, and so on, but the issue isn't resolved. Please help.
I have tested the S3 compatibility of MinIO with the tool at https://github.com/splunk/s3-tests. Some of the tests failed. Do these errors and failures matter? As far as I know, MinIO should fully support Splunk Enterprise.

s3tests.functional.test_headers.test_object_create_bad_md5_invalid_short ... FAIL
s3tests.functional.test_headers.test_object_create_bad_contentlength_none ... FAIL
s3tests.functional.test_headers.test_object_create_bad_authorization_empty ... FAIL
s3tests.functional.test_headers.test_bucket_create_bad_authorization_empty ... FAIL
s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2 ... FAIL
s3tests.functional.test_headers.test_bucket_create_bad_date_invalid_aws2 ... FAIL
s3tests.functional.test_s3.test_bucket_list_maxkeys_none ... FAIL
s3tests.functional.test_s3.test_bucket_list_return_data ... FAIL
s3tests.functional.test_s3.test_list_multipart_upload ... FAIL
s3tests.functional.test_s3.test_bucket_acls_changes_persistent ... FAIL
Hi, I am working on a project for a client to implement Splunk as their primary logging platform. I have designed the solution as a multisite cluster across two AWS regions. I am struggling to work out how I can deploy SmartStore across the two regions, where essentially the indexers in region A connect to a SmartStore remote store in region A and the indexers in region B connect to a SmartStore remote store in region B. Can you provide an example of this type of configuration? Regards, Ravi
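One caveat worth checking against the SmartStore documentation before designing this: to my understanding, all peers in a single (multisite) indexer cluster must point at the same remote object store; a per-region remote store within one cluster is not a supported layout. A sketch of the shared-store configuration, with placeholder names, that the cluster master would push to the peers in both regions:

```ini
# indexes.conf, distributed from the cluster master to all peers in both regions
# (bucket name, endpoint, and index name are placeholders)
[volume:remote_store]
storageType = remote
path = s3://smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

If genuinely independent region-local stores are a hard requirement, that tends to point toward two separate clusters rather than one multisite cluster, so it is worth validating the requirement with Splunk's docs or support first.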
Hi folks, quick question, but I'm running out of ideas. I have a dashboard where I compare results between US and EU; one of the filters is "side", which differentiates between EU and US. When I select EU results, I'd like to be able to click on a result and have it open a new window with details of how that result was achieved (it's on another website). Same scenario when I click with Side=NA: it should lead me to a different website. I tried this, but while the EU part works, the US part does not:

<drilldown>
<condition match="$side$ = EU"></condition>
<condition>
<link target="_blank">/app/SplunkEnterpriseSecuritySuite/correlation_search_edit?search=$row.rule_name$</link>
</condition>
<condition match="$side$ = US"></condition>
<condition>
<link target="_blank">www.youtube.com</link>
</condition>
</drilldown>

Any hints or ideas? Thanks, Klaudia
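Three things stand out in the snippet: each <link> needs to sit inside the <condition> it belongs to (an empty condition followed by an unconditional one makes the unconditional branch win), the token comparison is an eval expression so the token value usually needs quoting, and external links need a full URL including the scheme. A hedged sketch of how it might look:

```xml
<drilldown>
  <condition match="&quot;$side$&quot;==&quot;EU&quot;">
    <link target="_blank">/app/SplunkEnterpriseSecuritySuite/correlation_search_edit?search=$row.rule_name$</link>
  </condition>
  <condition match="&quot;$side$&quot;==&quot;US&quot;">
    <link target="_blank">https://www.youtube.com</link>
  </condition>
</drilldown>
```

Wrapping the token in &quot;...&quot; makes the substituted value a string literal in the match expression, so "EU"=="EU" evaluates as intended instead of being read as a field name.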
We've set up some Intelligence Downloads. These download files from a repository where retention is maintained at the source (the available file is always up to date, so old entries get removed). Since we'd like the intelligence in Splunk to keep up with it, we've set the retention (Maximum age) on the downloads to the lowest possible value, -1d, and the interval is set to 1800. The issue seems to be that the downloads do not refresh the time field, for example in the ip_intel lookup, so retention clears still-relevant IOCs even though the files are successfully downloaded every 30 minutes. After being deleted, they don't reappear on the next download either. Simply disabling/enabling the downloads makes all of them work once, but after 24h most entries get removed again because the time in the collection doesn't refresh on every download. I can't find any errors anywhere, and around 30% of the downloaded files seem to work somewhat better (entries get added at least sometimes during the 24-hour period, but still not every 30 minutes). The settings and naming convention (no spaces) are the same for all downloads. The Threat Intelligence Audit doesn't show any errors; based on it, the lists do get downloaded every 30 minutes, for example: status="threat list downloaded" file="/opt/splunk/var/lib/splunk/modinputs/threatlist/fqdn_critical.txt" bytes="1514"

What are some other places to look for errors? Or is this somehow expected behavior, say, if the downloaded file is exactly the same as before, it doesn't get processed?

Expected behavior:
- Every 30 minutes, every line in the downloaded file is refreshed into the related intel lookup and into Threat Artifacts

Current behavior:
- Some of the threat lists sometimes get refreshed; most only work once, when disabling and re-enabling the download from Intelligence Downloads