All Topics



Hello! I have the following search: | mstats avg(*) as * WHERE index=indexhere host=hosthere span=1 by host | timechart span=1m latest(*) as * What I am trying to do is show only the fields that contain the word "read" somewhere in the field name. Each field name is different and doesn't have "read" in the same place, or before/after the same special characters, either. I have tried fixing this with different commands but can't seem to find a good solution. Thanks in advance
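One possible approach, as an untested sketch: the fields command accepts wildcards in field names, so appending it after the timechart keeps only _time and the fields whose names contain "read" (note the match is case-sensitive).

| mstats avg(*) as * WHERE index=indexhere host=hosthere span=1 by host
| timechart span=1m latest(*) as *
| fields _time, *read*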
Hi, can you please let me know how we can find the time difference between two timestamp fields? For example, the two timestamp fields are in the format below:  Reception_Time = 06/21/2024 08:58:00.000000  Processing_End_Time = 06/21/2024 09:52:55.000000   Query:  (index="events_prod_gmh_gateway_esa") SPNLDSCR2406210858000001000 | spath Y_CONV | search Y_CONV=CACAFORM | spath ID_FAMILLE | search ID_FAMILLE=CAFORM | eval Time_in = "20" + substr(sRefInt , 9 , 15) | eval Processing_Start_Time = strptime(HdtIn,"%Y%m%d%H%M%S.%q") , Processing_End_Time = strptime(HdtOut,"%Y%m%d%H%M%S.%q") , Reception_Time = strptime(Time_in,"%Y%m%d%H%M%S.%q") | convert ctime(Processing_Start_Time) , ctime(Processing_End_Time) , ctime(Reception_Time) | table _time , ID_FAMILLE , MSG_TYP_CONV , MSG_TYP_ORIG , sRefInt , Reception_Time , Processing_Start_Time , Processing_End_Time
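Since strptime already produces epoch seconds, the difference is a plain subtraction; a minimal sketch to slot in before the convert (which turns the fields into display strings):

| eval Duration_sec = Processing_End_Time - Reception_Time
| eval Duration = tostring(Duration_sec, "duration")

tostring(..., "duration") renders the result as HH:MM:SS; add Duration_sec or Duration to the table as needed.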
Hi Splunkers, we are currently managing a Splunk Enterprise environment previously managed by another company. As sadly often happens, no documentation was handed over, so we had to discover almost all of the architecture information ourselves. We successfully managed many tasks related to this big problem, but a few remain; in particular, the one for which I am opening this discussion.
The point is this: almost all ingested data flows through a couple of HFs. The data flow is typically: Log sources -> On-prem HF -> Cloud HF (on IaaS VM) -> Cloud Indexer (on IaaS VM). With a search found here on the community, based on internal logs, I worked out how to see which Splunk component sends data to which other Splunk component. I mean: if I have HF on prem 1 -> HF on cloud 2, I know how to discover this by analyzing the internal logs. But what if I want to discover which on-prem HF collects the data sent to a specific index?
Let me give an example. Suppose I have this host set:
Log sources (with NO UF installed on them): Log source 1, Log source 2, Log source 3
On-prem HFs: HF on prem 1, HF on prem 2, HF on prem 3
On-cloud HF (IaaS VM): HF on Cloud 1
On-cloud indexer (IaaS VM): Indexer on cloud 1
Indexes: index1, index2, index3
At the starting point, I only know that all three on-prem HFs collect data and send it to the HF on Cloud; the data is then sent to the indexer. I don't know which on-prem HF collects data from which log source, nor into which index the data lands once it arrives on the indexer. Of course I could ask the system owners what configuration has been applied on the log sources, but the idea is to discover this with a Splunk search. Is this possible? The idea is to have a search where I can see the exact flow. For example, suppose one of the above flows is: Log source 1 -> On Prem HF 2 -> On Cloud HF -> On Cloud Indexer -> index3. I must be able to discover it.
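A possible starting point, purely as a sketch based on internal logs (field names such as sourceHost and hostname come from the tcpin_connections metrics and should be verified in your environment; the hop from log sources with no UF to the first HF will not appear here, only Splunk-to-Splunk hops do). First list which originating hosts write to the index, then map sender-to-receiver connections:

| tstats count where index=index3 by host

index=_internal source=*metrics.log group=tcpin_connections
| stats sum(kb) as kb by host, sourceHost

In the second search, host is the instance that received the connection (e.g. the on-cloud HF) and sourceHost is the instance that sent it (e.g. an on-prem HF), so running it over the _internal data of each hop lets you walk the chain back from the indexer towards the on-prem HFs, and the first search ties the index to the originating host names.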
Hi Splunk SMEs, good day. We are facing an issue after a deployment in Splunk: we can no longer connect to the Splunk HF DB Task Server. It was initially working fine; we upgraded the Java version from Corretto to Zulu last month and it seemed to work fine. After a recent deployment it now causes this issue. Can anyone assist me in solving this?   Thanks Mel
index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main" | rex \"status\":"(?<Http_code>\d+)" | rex \"evtType\":"\"(?<evt_type>\w+)"\... See more...
index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main" | rex \"status\":"(?<Http_code>\d+)" | rex \"evtType\":"\"(?<evt_type>\w+)"\" |search evt_type=REQUEST| stats count(eval(Http_code>0)) as "Totalhits" count(eval(Http_code <500)) as "sR"| append [ search index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main"| rex field=_raw "Status code:"\s(?<code>\d+) |stats count(eval(code =500)) as error]   Hi All I want to add error count in to Totalhits like eval TotalRequest = error+TotalHits It is showing as null value. Please help me to achieve this
I still find it difficult to understand the logic of joining two indexes. Below is a query which almost suits my needs ... ALMOST index="odp" OR index="oap" txt2="ibum_p" | rename e as c_e | eval c_e = mvindex(split(c_e, ","), 0) | stats values(*) by c_e Line 1 - the two indexes are joined and one of them is filtered (to create a one-to-one relation). Lines 2 & 3 - rename and modify the key column in the second index so it becomes identical to the one in the first index. Line 4 - show all columns. The result contains 400 records - the same as each index separately. But the result only shows columns from the second index. I assumed values(*) meant all columns from all indexes. I tried listing each column separately but it doesn't change anything - the columns from the first index are still empty - WHY?? If I get past this milestone I will start the aggregations. Any hints?
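A sketch of a common cause and fix (the field name key_a is a placeholder for whatever the first index really uses as its key): stats drops rows whose by-field is null, so if only one index has a field called e, the other index's rows vanish after rename e as c_e and only one side's columns ever get values. Normalising the key on both sides first, and renaming the aggregates back to their own names, usually helps:

index="odp" OR index="oap" txt2="ibum_p"
| eval c_e = coalesce(e, key_a)
| eval c_e = mvindex(split(c_e, ","), 0)
| stats values(*) as * by c_e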
Hello everyone, I'm new to Splunk and I have a question: is it possible to update the code of a custom reporting command without restarting Splunk? The docs say: "After modifying configuration files on disk, you need to restart Splunk Enterprise. This step is required for your updates to take effect. For information on how to restart Splunk Enterprise, see Start and stop Splunk Enterprise in the Splunk Enterprise Admin Manual." I mean... how can I debug my app if I have to restart Splunk every time I change something?
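For what it's worth: the Python behind a custom search/reporting command is executed per search invocation, so changes to the .py files under the app's bin directory normally take effect on the next run of the command without a restart. It is mainly .conf changes (commands.conf, props.conf, etc.) that need a reload; in many cases visiting the debug refresh endpoint in the browser, e.g. https://<your-splunk-host>:8000/<locale>/debug/refresh (URL shown as an example, adjust host, port, and locale), is enough instead of a full restart.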
How do I best choose the time range for Splunk alerts to handle delayed events, so that no events are skipped and no events are processed twice?
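One common pattern, as a sketch (the hourly schedule and the lag handling via _indextime are assumed examples to adapt): search a wider _time window but keep only events that were indexed during the last completed interval, so late-arriving events are picked up exactly once.

index=your_index earliest=-4h@h latest=now
| eval it=_indextime
| where it >= relative_time(now(), "-1h@h") AND it < relative_time(now(), "@h")

Alternatively, simply shift the whole alert window back by the maximum expected delay (e.g. earliest=-70m@m latest=-10m@m for an hourly alert with up to 10 minutes of lag).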
Recently we replaced our RedHat 7 peers with new RedHat 9 peers, and it seems we lost some data in the process... Looking at the storage, it almost seems like we lost the cold buckets (and maybe also the warm ones). We managed to restore a backup of one of the old RHEL7 peers and connected it to the cluster, but it looks like it's not replicating the cold buckets to the RHEL9 peers. We are not using SmartStore; the cold buckets are in fact just stored in another subdirectory under the $SPLUNK_DB path. So the question arises: are warm and cold buckets replicated? Our replication factor is set to 3 and I added a single restored peer to a 4-peer cluster. If there is no automated way of replicating the cold buckets, can I safely copy them from the RHEL7 node to the RHEL9 nodes (e.g. via scp)?
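For what it's worth: in an indexer cluster, replicated copies exist for warm and cold buckets too; replication happens while a bucket is hot and the copies follow it through warm and cold, but buckets that only ever lived on a non-clustered or missing peer cannot be recreated automatically. A quick way to see how many copies of each bucket the cluster currently holds (a sketch; run from a search head, dbinspect output fields can vary by version):

| dbinspect index=your_index
| stats dc(splunk_server) as copies values(splunk_server) as servers by bucketId state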
Hi, how can I write to the app.conf file in Splunk using Python? I am able to read the file using splunk.clilib but I'm not sure how to write to it. [stanza_name] name=abcde   How can I add a new entry or update an existing one? Please help.   Thanks
I have a few questions I would like your support on. We recently migrated from a distributed to a clustered environment and are not yet familiar with the cluster environment.  1st question: On the migrated standalone search head we needed to run the Splunk App for CEF to transform some events into CEF format before sending them. For some reason, for the Splunk App for CEF to work, we had to unrestrict "unsupported hotlinked imports" on that standalone search head in Settings -> Server Settings -> Internal Library Settings. Unfortunately, after the migration, I can't find "Server Settings", "Server Controls", etc. on the cluster members. 1.a: I am wondering whether this is normal behavior for cluster members; if yes, how can I unrestrict "unsupported hotlinked imports"? 1.b: I am also wondering whether there is another way to transform events into CEF format without using the Splunk App for CEF. 2nd question: We are using one instance as both cluster manager and search head deployer; I am wondering whether it is normal to see the search head deployer listed among the search heads. Thank you
I have a dashboard X consisting of multiple panels (A, B, C), each populated with dynamic tokens. Panel A consists of tabular data. When a user clicks on a cell, the table data is registered as tokens. When a token value changes, this triggers JavaScript which "activates" panel B, which is originally hidden. This then creates a popup consisting of panel B, populated with data passed via tokens from panel A.  Splunk has a default Export to PDF functionality. I know it uses pdfgen_endpoint.py, but how does clicking this button trigger the Python script? Currently this functionality works for exporting dashboard X. How do I adjust it so it also works for panel B? The /splunkd/__raw/services/pdfgen/render PDF endpoint must be called with one of the following args: 'input-dashboard=<dashboard-id>' or 'input-report=<report-id>' or 'input-dashboard-xml=<dashboard-xml>', but if I try to parse the XML it requires all token values to be resolved.  Please assist.
From the documentation, I believe that the Task Server should start after I set up JAVA_HOME, but it has been failing to start, with only the message "Failed to restart task server." I am running Splunk version 8.1.5. When installing DB Connect 3.17.2, I first installed OpenJDK 8, but DB Connect stated that it required Java 11, so I installed java-11-openjdk version 11.0.23.0.9.  The Task Server JVM Options were automatically set to "-Ddw.server.applicationConnectors[0].port=9998". Is there anything else missing? Is there a way to debug this issue? I looked into the internal logs from this host but have not been able to find anything that stands out.   Thanks for any insights and thoughts.
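To dig for the underlying error (a sketch; the source pattern is an assumption about where DB Connect 3.x writes its task server logs, typically files matching splunk_app_db_connect_*.log under $SPLUNK_HOME/var/log/splunk):

index=_internal source=*splunk_app_db_connect* (ERROR OR FATAL)
| sort - _time

The Java home and JVM options the task server actually uses are also worth re-checking in the app's local configuration (e.g. dbx_settings.conf under the DB Connect app, path is an assumption for 3.x) after a Java swap.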
Hello Splunk Community! There are clear instructions on how to import services from a CSV file in ITSI. However, I can't find a way to export the same data into a CSV file. How can I export service dependencies from ITSI? Thanks.
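One route worth trying, strictly as a sketch (the endpoint path comes from the ITSI REST API; the listed fields are examples and the rest command may not flatten the JSON response cleanly):

| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service splunk_server=local
| table title services_depends_on
| outputcsv itsi_service_dependencies

If the rest command does not parse the response, the same endpoint can be queried outside Splunk (for example with curl) and the JSON converted to CSV there.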
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe... See more...
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe it will make clear what is wanted, values in 2nd search events within milliseconds (2000 shown) of first search's event....     index=someIndex searchString | rex field=_raw "stuff(?<REFERENCE_VAL>)$" | stats _time as EVENT_TIME | append (search index=anIndex someSearchString | rex field=_raw "stuff(?<RELATED_VAL>)$" | eval timeBand=_time-EVENT_TIME | where abs(timeBand)<2000 | stats _time as RELATED_TIME) | table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL    
Hello, would anyone know whether it is possible to migrate an on-prem SmartStore deployment to Splunk Cloud? How would that happen? Thank you!
We are looking to integrate Splunk SIEM with our microservice: we want to send events from the service to Splunk and then configure alerts based on eventType.  As we understand it, there are two approaches: the Universal Forwarder and the HTTP Event Collector. We are leaning towards HEC, as it can acknowledge events; the challenge with the Universal Forwarder is that it needs to be managed by the customer where Splunk will be running, and the volume of events is also not that high. Can someone help us understand the cost involved in both approaches, and how HEC scales if the number of events increases due to a spike? Also, should we build a Technology Add-on or an app that can be used along with Splunk Enterprise Security? We want to implement this for Splunk Enterprise as well as Splunk Cloud. #SplunkAddOnbuilder
Hi there, for better visibility I built a dashboard for indexer restarts; the dashboard is based on the _internal index and /var/log/messages from the indexers themselves. I would like to add information on how a restart was triggered, so I can see whether the restart came from the manager (Web UI: Configuration Bundle Actions) or was done via the CLI. Does Splunk log this? If yes, where do I find that info? Thanks in advance!
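Two places worth checking, purely as a sketch (the search terms are guesses to adapt, since the exact log messages vary by Splunk version): a rolling restart pushed from the cluster manager is normally announced in splunkd.log on the manager and the peers before the shutdown sequence, whereas a plain CLI or service restart only shows the usual shutdown/startup messages, and UI-driven actions may also leave a trail in the _audit index.

index=_internal sourcetype=splunkd (host=<your_manager> OR host=<your_indexers>) "rolling restart"
index=_audit host=<your_manager> "restart"

If neither yields anything for a given restart, the absence of a manager-side message combined with the /var/log/messages entry you already ingest is itself a reasonable indicator that the restart was manual.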
I have a few events where data is not available; instead I see commas where the head6 and head7 data would be. I need a rex so that the output is blank if there is no data, but if data is available it provides the value. Below is the event (three consecutive commas between UNKNOWN and /test):   head1,head2,head3,head4,head5,head6,head7,head8,head9,head10,head11,head12 sadfasdfafasdfs,2024-06 21T01:33:30.918000+00:00,test12,1,UNKNOWN,,,/test/rrr/swss/customer1/454554/test.xml,UNKNOWN,PASS,2024-06-21T01:33:30.213000+00:00,UNKNOWN
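A sketch that keys on column position rather than content (assuming head6 and head7 are the 6th and 7th comma-separated columns, as in the sample): [^,]* also matches an empty string, so missing values come back blank instead of breaking the extraction.

| rex "^(?:[^,]*,){5}(?<head6>[^,]*),(?<head7>[^,]*)"

If the whole file is well-formed CSV, a delimiter-based extraction at index or search time (e.g. INDEXED_EXTRACTIONS=csv or a DELIMS transform) would give the same result for every column without a rex.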
Hi all, I have a search that works for a range of a few days (e.g. earliest=-7d@d), but when run over all time it breaks. I suspect this is an issue with appendcols or streamstats? Any pointers would be appreciated. I'm using this to generate a lookup which I can then search instead of running an expensive all-time search. index=ndx sourcetype=src (device="PM4") earliest=0 latest=@d | bucket _time span=1d | stats max(value) as PM4Val by _time index | appendcols [ search index=ndx sourcetype=src (device="PM2") earliest=0 latest=@d | bucket _time span=1d | stats max(value) as PM2Val by _time index ] | streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index | eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val | table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta | sort index -_time
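A sketch of an appendcols-free version (same field and index names as the search above; untested): appendcols pairs rows purely by position, so when the two subsearches return different numbers of rows, or the subsearch hits its limits on an all-time run, the columns drift apart or go missing. Splitting the two devices inside a single stats avoids that entirely:

index=ndx sourcetype=src device IN ("PM4","PM2") earliest=0 latest=@d
| bucket _time span=1d
| stats max(eval(if(device=="PM4",value,null()))) as PM4Val max(eval(if(device=="PM2",value,null()))) as PM2Val by _time index
| sort 0 index _time
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time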