All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to make a report on how many alerts fired in a day. From the job inspector I'd like to collect all of this info: owner, app, events, size, and runtime. The goal is to determine how many alerts overlap each other and how many times each alert triggered. SPL preferred. Basically, I want this information to help me build a detailed report about the alerts in our system.
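Alert executions are recorded in the _internal index by the scheduler, so a report like this can be sketched from scheduler.log. This is a sketch, assuming _internal retention covers the day in question; savedsearch_name, app, user, run_time, and result_count are field names the scheduler emits:

```spl
index=_internal sourcetype=scheduler alert_actions=* earliest=-1d@d latest=@d
| stats count AS times_triggered,
        values(user) AS owner,
        avg(run_time) AS avg_runtime_s,
        sum(result_count) AS events
    by app, savedsearch_name
| sort - times_triggered
```

Overlap between alerts could then be examined by running timechart over the same scheduler events instead of stats.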
Got this error on the search head; please help us resolve it:

> Search peer xxxxxx has the following message: The metric value=0.00003393234971117585 provided for source=/opt/splunkforwarder/var/log/splunk/metrics.log, sourcetype=splunk_metrics_log, host=xxxxx, index=_metrics is not a floating point value. Using a "numeric" type rather than a "string" type is recommended to avoid indexing inefficiencies. Ensure the metric value is provided as a floating point number and not as a string. For instance, provide 123.001 rather than "123.001"
Hi everyone, after upgrading a heavy forwarder to version 9, we've encountered the following error: "Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1219. Message from 60F7CA48-C86F-47AD-B6EF-0B79273913A8:172.20.161.1:55892". Could you please assist in resolving the issue?
Hope you are doing great. I'm again facing a challenge and seeking some help. Problem statement: we have 200 Windows servers, of which 3 devices suddenly stopped reporting. I checked outputs.conf and server.conf and they look fine; I also compared those files against a working server and everything matches. I checked the status of the non-reporting servers and they show as up and running, and when checking via TTL the servers respond, but I'm still unable to get the data into Splunk. I don't have much idea what the root cause could be; it would be great if you could suggest something. Note: Splunk is installed on-prem. Thanks, Debjit
Hi Team, we are not receiving alert emails even when events matching the alert condition are present in Splunk Cloud. Please help us resolve this.
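The _internal index usually shows both whether the alert actually fired and whether the email action ran. A sketch for checking the scheduler side, with "Your Alert Name" as a placeholder for the real saved search name:

```spl
index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| table _time, status, result_count, alert_actions
```

If alert_actions lists email but no message arrives, searching index=_internal sourcetype=splunkd sendemail for errors, and checking spam filtering on the mail side, is the usual next step.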
When I export my add-on, the tar.gz is created with a local folder; when I extract it, the local folder is indeed there. When I try to upload the add-on tar.gz, I get an error message. What is the problem? Thanks in advance, Amir
Hi All, How can I build a use case and get notified in Splunk when a user does not swipe his/her access card at the door but is logged into the domain? Please help.
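A hedged correlation sketch: compare domain logons against badge swipes per user over a window. The index and sourcetype names (wineventlog, physical_security, badge_swipe) and the assumption that both sources share a user field are placeholders, not known configuration; Windows interactive logons are EventCode 4624:

```spl
index=wineventlog EventCode=4624 earliest=-1d@d
| stats max(_time) AS last_logon by user
| join type=left user
    [ search index=physical_security sourcetype=badge_swipe earliest=-1d@d
      | stats max(_time) AS last_swipe by user ]
| where isnull(last_swipe)
```

Each result is a user who logged on to the domain with no badge swipe on record for the day; saving the search as an alert provides the notification.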
Hi, I have a task to display the status of two URLs in the following table format:

URL Name                           | In Usage | Status
http://lonmd1273241:4001/gmsg-mds/ | Yes      | Up
http://sfomd1273241:4001/gmsg-mds/ | No       | Up

The http://lonmd1273241:4001/gmsg-mds/ URL is printed in the application's live logs, while http://sfomd1273241:4001/gmsg-mds/ is not printed in the logs. The status code is also printed in the logs for http://lonmd1273241:4001/gmsg-mds/, which is used to populate the Status column. Can someone please help with a query to create such a table in a dashboard? Any help would be appreciated.
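A sketch under stated assumptions: the index name (app_logs) and the extracted url and status fields are placeholders for whatever the application logs actually provide. The append adds a zero-count row for the URL that never appears in the logs, so it still shows up in the table:

```spl
index=app_logs url="http://lonmd1273241:4001/gmsg-mds/" OR url="http://sfomd1273241:4001/gmsg-mds/"
| stats count AS hits, latest(status) AS status_code by url
| append
    [| makeresults
     | eval url="http://sfomd1273241:4001/gmsg-mds/", hits=0]
| stats max(hits) AS hits, latest(status_code) AS status_code by url
| eval "In Usage"=if(hits > 0, "Yes", "No")
| eval Status=if(isnull(status_code) OR status_code < 400, "Up", "Down")
| rename url AS "URL Name"
| table "URL Name", "In Usage", Status
```

The "Up" default for the URL with no log entries matches the desired table; the real rule for its status would need to come from some other data source.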
How do I perform stats on a large number of fields matching a certain pattern without running stats on each one individually? In the sample event below, there are 10+ fields with names beginning with "er_". My task is to fire an alert if any of the values in these fields increases from the previous event. Sample event:

er_bad_eof: 0
er_bad_os: 0
er_crc: 0
er_crc_good_eof: 0
er_enc_in: 0
er_enc_out: 0
er_inv_arb: 0
er_lun_zone_miss: 0
er_multi_credit_loss: 0
er_other_discard: 11
er_pcs_blk: 0
er_rx_c3_timeout: 0
er_single_credit_loss: 0
er_toolong: 0
er_trunc: 0
er_tx_c3_timeout: 0
er_type1_miss: 0
er_type2_miss: 0
er_type6_miss: 0
er_unreachable: 0
er_unroutable: 11
er_zone_miss: 0
lgc_stats_clear_ts: Never
phy_stats_clear_ts: Never
port_description: slot12 port46
port_name: 382

Here is the SPL where I run stats on just two of those fields; the "er_..._delta" values will be used to fire an alert if they're > 0:

index="sandbox" source="HEC"
| stats count AS events,
    min(er_enc_out) AS er_enc_out_min, max(er_enc_out) AS er_enc_out_max,
    min(er_other_discard) AS er_other_discard_min, max(er_other_discard) AS er_other_discard_max
    by host, port_name, port_description
| eval er_enc_out_delta = er_enc_out_max - er_enc_out_min,
    er_other_discard_delta = er_other_discard_max - er_other_discard_min
| sort -er_enc_out_delta -er_other_discard_delta -er_enc_out_max -er_other_discard_max port_name

How do I run similar stats on all fields with names beginning with "er_"? Thanks!
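A sketch of one approach: stats accepts wildcards in aggregation functions and in the AS clause, and foreach can then iterate over the resulting fields, with <<MATCHSEG1>> capturing the part of the field name matched by the wildcard. This assumes the same index, source, and grouping fields as the query above:

```spl
index="sandbox" source="HEC"
| stats count AS events, min(er_*) AS min_er_*, max(er_*) AS max_er_* by host, port_name, port_description
| foreach max_er_*
    [ eval delta_<<MATCHSEG1>> = '<<FIELD>>' - 'min_er_<<MATCHSEG1>>' ]
| eval any_increase = 0
| foreach delta_*
    [ eval any_increase = if('<<FIELD>>' > 0, 1, any_increase) ]
| where any_increase > 0
```

An alert configured to trigger when this search returns any results would cover every er_* counter at once.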
Hi guys, I'm new to Splunk. I'm trying to write a query that compares two search results and shows the differences and the matches; both results come from the same index. I would like something like this, where {path-values} holds the path values and {countpath} holds the count:

Build-type | paths-count | matches-values | diff-values   | matches-count | diff-count
gradle     | 20K         | {path-values}  | {path-values} | {countpath}   | {countpath}
bazel      | 10K         | {path-values}  | {path-values} | {countpath}   | {countpath}

My index is based on this JSON, with about 30K events (number of JSON documents posted to Splunk) in total:

{"source":"build","sourcetype":"json","event":{"type":"bazel","paths":["test3"]}}

My current query looks like:

index="build" type="bazel"
| stats values(paths{}) as paths
| stats count(eval(paths)) AS totalbazelpaths
| mvexpand totalbazelpaths
| eval eventFound = 0
| join type=left run_id paths
    [ index="build" type="gradle"
      | stats values(paths{}) as paths
      | stats count(eval(paths)) AS totalgradlepaths
      | mvexpand totalgradlepaths
      | eval eventFound=1 ]
| eval percentage = round(totalgradlepaths/totalbazelpaths, 10)
| table totalgradlepaths totalbazelpaths percentage

Any help on how to achieve this? @yuanliu Thanks!
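A sketch of a join-free approach: search both build types at once, group by path, and classify each path by which build types it appears in. The type and paths{} field names are taken from the query above; the output layout differs a little from the requested table:

```spl
index="build" (type="bazel" OR type="gradle")
| stats values(type) AS build_types dc(type) AS type_count by paths{}
| rename paths{} AS path
| eval status = if(type_count = 2, "match", "only_in_" . mvindex(build_types, 0))
| stats count AS path_count values(path) AS paths by status
```

Grouping with stats by a multivalue field such as paths{} emits one row per value, so no mvexpand is needed, and avoiding join keeps the search within subsearch limits.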
Hello everyone, I have a lookup file with 5 entries, with field names and values as below: "New_field"="yes", New_field1="yes", "New_field3"="yes", New_field4="Yes". I need to append one new row to the lookup file with all the field values set to "no". I am using the command below to do this:

|inputlookup sample_demo.csv |append [|inputlookup sample_demo.csv|eval "New_field"="no", New_field1="no", "New_field3"="no", New_field4="no"]

This query adds new rows, but it adds 5 of them; I just need a single row appended with the field values set to "no". Can anyone please guide me on what I am missing in the query?
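The subsearch returns every row of the lookup, so all 5 rows come back with their values overwritten. A sketch of one fix: keep only one row with head 1 before the eval (the filename and field names are taken from the question):

```spl
| inputlookup sample_demo.csv
| append
    [| inputlookup sample_demo.csv
     | head 1
     | eval New_field="no", New_field1="no", New_field3="no", New_field4="no"]
```

Adding | outputlookup sample_demo.csv at the end would write the combined result back to the lookup file.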
I want to perform a search query that gives me results relative to a specific time. For example, I have this particular time: 2022-07-29 18:33:20. My query:

index="*" sourcetype="pan:threat" 10.196.246.104 url=* earliest=relative_time("2022-07-29 18:33:20","-1h") AND latest=relative_time("2022-07-29 18:33:20","+1h")
| stats values(url) as url by _time, dest_ip, dest_port, app, category, rule, action, user

I am not getting appropriate results with this. Can anyone suggest how I can filter on the basis of a particular time?
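The earliest and latest modifiers do not evaluate functions like relative_time(); they take literal time strings (by default in %m/%d/%Y:%H:%M:%S format) or epoch values. A sketch with the one-hour window around 2022-07-29 18:33:20 computed by hand:

```spl
index="*" sourcetype="pan:threat" 10.196.246.104 url=*
    earliest="07/29/2022:17:33:20" latest="07/29/2022:19:33:20"
| stats values(url) AS url by _time, dest_ip, dest_port, app, category, rule, action, user
```

For a dashboard, the anchor time could instead be a token and the window computed with relative_time() inside an eval before a where clause on _time.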
Hi, just wondering about the Splunk Add-on for Microsoft Cloud Services: the documentation says it is required only on the search head cluster, but optional on the heavy forwarder. My question: I usually set up the inputs on the deployment server; however, as this is cloud data, should the inputs.conf be on the heavy forwarder or on the search head? Thanks, Joe
Hi Splunkers, requirement: I have to create a table with the COUNT OF ERRORS based on a text search in the _raw data. I have created the query below:

eventtype=XXX_AC_db ("Transaction (Process ID *) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.*" OR "Rest Api POST error. Database has timed out. (TT-000346)")
| rex field=Exception "System(?<m>.*):\s(?<message>.*)\s+at"
| eval message=if(like(message,"%Transaction (Process ID %) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.%"),"Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.",message)
| stats count by message
| append [ stats count | where count=0 | eval message="Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."]
| append [| search eventtype=XXX_AC_db "Rest Api POST error. Database has timed out. (TT-000346)" | stats count by Message | rename Message as message]
| append [ stats count | where count=0 | eval message="Rest Api POST error. Database has timed out. (MG-000346)"]
| append [| search eventtype=XXX_AC_db "*Database has timed out. (TT-000346)*" | eval Message=if(like(Message,"%Database has timed out. (TT-000346)%"),"Database has timed out. (TT-000346)",Message) | stats count by Message | rename Message as message]
...................

This query takes too much time to execute. Is there another way to combine the different searches and still get the result? Thank you in advance.
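Each append above re-runs a subsearch over the same data. A sketch of a single-pass alternative: classify every event once with case(), count, and backfill the zero counts with static rows; the eventtype and message strings are taken from the question, and additional patterns would be added as further case() branches and split() entries:

```spl
eventtype=XXX_AC_db
| eval message=case(
    like(_raw, "%was deadlocked on lock resources with another process%"), "Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.",
    like(_raw, "%Database has timed out. (TT-000346)%"), "Database has timed out. (TT-000346)")
| where isnotnull(message)
| stats count by message
| append
    [| makeresults
     | eval message=split("Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.;Database has timed out. (TT-000346)", ";"), count=0
     | mvexpand message
     | fields message count]
| stats sum(count) AS count by message
```

The data is read once instead of once per append, which is usually where the runtime goes.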
Hello everybody, my query:

index=logarithm SrcAddr="192.168.148.1"
| eval flag=case(DestAddr="192.168.148.7" OR DestAddr="192.168.148.8" OR DestAddr="192.168.148.24", "LAN 1",
    DestAddr="192.168.148.21" OR DestAddr="192.168.148.36" OR DestAddr="192.168.148.37", "LAN 4",
    DestAddr="192.168.148.33" OR DestAddr="192.168.148.34" OR DestAddr="192.168.148.35", "LAN 5")
| chart count over flag by DestAddr useother=f usenull=f

In trellis mode, every DestAddr appears in each flag's panel (as seen in the attached picture), but I do not want to show DestAddrs with 0 values in each chart:
"LAN 1" should show only "192.168.148.7", "192.168.148.8", or "192.168.148.24"
"LAN 4" should show only "192.168.148.21", "192.168.148.36", or "192.168.148.37"
"LAN 5" should show only "192.168.148.33", "192.168.148.34", or "192.168.148.35"
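chart produces a column for every DestAddr, so zero-valued series appear in every trellis panel. A sketch of an alternative: stats only emits flag/DestAddr pairs that actually occur, so a trellis split by flag on this result should leave the zero series out (same case() mapping as the query above):

```spl
index=logarithm SrcAddr="192.168.148.1"
| eval flag=case(DestAddr="192.168.148.7" OR DestAddr="192.168.148.8" OR DestAddr="192.168.148.24", "LAN 1",
    DestAddr="192.168.148.21" OR DestAddr="192.168.148.36" OR DestAddr="192.168.148.37", "LAN 4",
    DestAddr="192.168.148.33" OR DestAddr="192.168.148.34" OR DestAddr="192.168.148.35", "LAN 5")
| where isnotnull(flag)
| stats count by flag, DestAddr
```

Whether the zero series stay hidden also depends on the visualization settings, so this is a starting point rather than a guaranteed fix.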
We have built a dashboard in Splunk Dashboard Studio using absolute layout. We added some rectangle shapes to the dashboard and gave them colors. Now we want the rectangles to change color on hover. Is there any way to achieve this in Dashboard Studio?
We have built a dashboard in Splunk Dashboard Studio using absolute layout. Now we want to remove or hide the Splunk Enterprise bar from the dashboard, since the client doesn't want it there. Since Dashboard Studio uses JSON, I am not finding any workaround to hide the Splunk Enterprise logo bar. Can someone help me with this?
Hi, I have 4 sources for one sourcetype, and I am getting data from 3 of the sources but not from the other one. The logs are present on disk but are not showing up in Splunk. I checked inputs.conf and the configuration is the same for all 4 sources; crcSalt = &lt;SOURCE&gt; is also set in inputs.conf. I restarted the servers, but I am still not able to see the data. Can you please tell me if I am missing anything?
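Splunk's own logs usually say why a monitored file is being skipped (CRC already seen, permissions, exclusion rules). A sketch for checking, with /path/to/the/missing/source as a placeholder for the real file path:

```spl
index=_internal sourcetype=splunkd
    (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
    "/path/to/the/missing/source"
```

On the forwarder itself, $SPLUNK_HOME/bin/splunk list monitor shows whether the file is being picked up by the monitor input at all.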
After following the well-verified steps noted in https://community.splunk.com/t5/Deployment-Architecture/How-to-move-the-SHC-deployer-to-another-host-Part-2/m-p/604671#M25839, I was not able to successfully connect and test a push from the new deployer to the shcluster members. I received this error:

> Error while deploying apps to first member, aborting apps deployment to all members: Error while fetching apps baseline on target=https://host:8089: Non-200/201 status_code=401; {"messages":[{"type":"ERROR","text":"Unauthorized"}]}

Here are my steps:
1) Copied the contents of /opt/splunk/etc/shcluster from the old deployer to /opt/splunk/etc/shcluster on the new deployer.
2) Configured the new deployer's [shclustering] stanza in /opt/splunk/etc/system/local/server.conf with the info from the old deployer's [shclustering] stanza.
3) Updated conf_deploy_fetch_url in server.conf on each of the SHC members.
4) Restarted the new deployer and performed a rolling restart on the SHC members.
5) Ran a test apply bundle and received the Unauthorized error.

I believe the issue could be that the pass4SymmKey on the new deployer does not match the pass4SymmKey on the SHC members. On the old deployer I ran ./splunk show-decrypted --value &lt;key&gt; against:

[shclustering]
pass4SymmKey = &lt;key&gt;
shcluster_label = Company_shcluster1

I used the decrypted key as the pass4SymmKey for the new deployer, but ultimately I am not able to run a successful push. Is there a way to recover these keys? The previous admin did not save the original secrets used to set up the deployer. Any advice greatly appreciated. Thank you.
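An encrypted pass4SymmKey is tied to the splunk.secret of the instance that encrypted it, so the value decrypted on the old deployer must be entered in plaintext on the new one; Splunk re-encrypts it on restart. A sketch of what the new deployer's stanza should look like before that restart (the label is taken from the question; the key is a placeholder):

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the new deployer
[shclustering]
# Paste the plaintext key here; Splunk rewrites it as $7$... on restart.
pass4SymmKey = <plaintext-key>
shcluster_label = Company_shcluster1
```

If the plaintext cannot be recovered, another option is to set a brand-new pass4SymmKey in plaintext on the deployer and on every SHC member and restart them, so all instances agree on the key again.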
Hi there, I am using RHEL 8.6 x86_64 (Ootpa) / kernel 4.18.0 and trying to update the Splunk Add-on for Unix and Linux. I am getting this error:

An error occurred while downloading the app: [HTTP 404] https://127.0.0.1:8089/services/apps/local/Splunk_TA_nix/update; [{'type': 'ERROR', 'code': None, 'text': 'Error downloading update from https://splunkbase.splunk.com/app/833/release/8.6.0/download/?origin=cfu: Not Found'}]

When I manually tried to download from that link (https://splunkbase.splunk.com/app/833/release/8.6.0/download/?origin=cfu), I got "Oops! 404 Error: Page not found." Please share your thoughts on how to update the Unix/Linux add-on from the Splunk console.