All Topics

I want to get a report for the last 5 days with the availability of devices/tools. For this I search for all defective devices per day with the stats dc command. Afterwards I calculate the available devices with the equation Avail = All - Defect and display this for every day of the last week. But after the stats dc command only 2 fields are visible: _time and dc_Devices. For my calculation I need the All field too. My search:

  ... | eval All = 100 | bin span=1d _time | stats dc(Devices) as dc_Devices by _time

How can I add a calculation for the available value? Thanks in advance
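A minimal SPL sketch of one way to keep the All value through the aggregation (assuming All really is the constant 100 from the eval above): carry it along as an aggregate, then derive the availability.

  ... | eval All = 100 | bin span=1d _time | stats dc(Devices) as dc_Devices, max(All) as All by _time | eval Avail = All - dc_Devices

Since All is a constant here, it could equally just be re-created with another eval after the stats.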
Hi Team, I was comparing the summary index transaction time with the live Splunk server transaction time. I see that all transactions collected in a 15-minute bucket keep the same time, which overrides the actual transaction time. Is there a way to retain the original time while still keeping the count going in the defined buckets? Nishant
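Without the summary-populating search it is hard to be exact, but a minimal sketch of one common pattern (the field names orig_time, first_txn_time and last_txn_time are only illustrative): preserve the original timestamp in a separate field before _time is binned for the summary.

  ... | eval orig_time=_time | bin span=15m _time | stats count, min(orig_time) as first_txn_time, max(orig_time) as last_txn_time by _time

The binned _time keeps the 15-minute counting buckets, while first_txn_time and last_txn_time retain the actual transaction times in the summary events.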
Hello Team, how can I combine the two searches given below and get the AWS instance name?

  aws-description-resource( (aws_account_id="*") , (region="*") , "ec2_instances") | search (private_ip_address="172.20.187.54")

  index=c3d_security host=ip-172-23* rule=corp_deny_all_to_untrust NOT dest_port=4431 | table src_ip dest_ip transport dest_port application

Note: I am getting the output as src_ip, dest_ip, transport, dest_port and application, so how can I combine these two searches and add the AWS instance name to the table?
Regards, Neelesh Tiwari
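A minimal sketch of one way this might be stitched together with a join, assuming the firewall's src_ip corresponds to the instance's private_ip_address and that the description search returns a name field such as instance_name (that field name is an assumption, so check what the first search actually outputs):

  aws-description-resource( (aws_account_id="*") , (region="*") , "ec2_instances")
  | rename private_ip_address as src_ip
  | fields src_ip instance_name
  | join type=inner src_ip [ search index=c3d_security host=ip-172-23* rule=corp_deny_all_to_untrust NOT dest_port=4431 | table src_ip dest_ip transport dest_port application ]
  | table src_ip dest_ip transport dest_port application instance_name

A lookup generated from the description search would be a join-free alternative if the instance inventory is large.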
Hello, I need assistance with a time format.
Input: %F (2021-11-23)
Desired output: 23 Nov, 11/23/21, 11/23/2021
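A minimal sketch of how such conversions are usually done in SPL, assuming the input value lives in a field named date_in (the field name is an assumption): parse with strptime, then re-format with strftime.

  ... | eval t=strptime(date_in, "%F")
  | eval out1=strftime(t, "%d %b"), out2=strftime(t, "%m/%d/%y"), out3=strftime(t, "%m/%d/%Y")

Here "%d %b" yields 23 Nov, "%m/%d/%y" yields 11/23/21, and "%m/%d/%Y" yields 11/23/2021.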
Hi all. I'm fairly new to Splunk and regex. I've got many event logs and I'm making use of data models before generating different visualisations. The fields discovered aren't good enough for my use case, so I need to extract specific fields. Right now, using the following regex

  (?<field_name>(([a-zA-Z]+(\.[a-zA-Z]+)+)_([a-zA-Z]+(|[a-zA-Z]+)+)|/^([^.]+)/))

I'm able to extract the pattern ABC|DEF|GHI most accurately. Subsequently, I would like to extract each respective word into its own field, i.e. 3 different fields for ABC, DEF and GHI respectively. Is there a way I can extract each individual word? How can I perform a regex expression on top of my current regex expression result? Thank you.
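A minimal sketch of one way to split a value like ABC|DEF|GHI into three fields, assuming the combined value has already been extracted into field_name (the new field names part1/part2/part3 are only illustrative): rex can run on top of an existing field, so it stacks on the first extraction.

  ... | rex field=field_name "^(?<part1>[^|]+)\|(?<part2>[^|]+)\|(?<part3>[^|]+)$"

If the number of segments varies, split(field_name, "|") combined with mvindex would be a more flexible alternative.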
I have made my search query run over all time because I have created dropdowns for month, date and year. But I want the search result to always display the latest result. How can I do that? I pass the date, month and year to the search query, but for the default I want the dashboard to always display the latest result.
Hi, I need to improve the subsearch below. Let me explain: the piece of code in the subsearch counts the number of cores of the machine, and this count is always the same no matter the time range. So I wonder whether it would be better to put these results in a csv lookup and query the csv lookup instead of querying the index? Or are there other ways to improve this search? Thanks

  index=toto sourcetype=tutu type=* runq | fields host _time runq type | stats max(runq) as runq by host _time | join host [ search index=toto sourcetype=tutu type=* | fields host cpu_core | search host=1328 | stats max(cpu_core) as nbcore by host ] | eval Vel = (runq / nbcore) | eval _time = strftime(_time, "%d-%m-%y %H:%M:%S") | sort - _time | rename host as Host, _time as Heure | table Heure Host Vel | sort - Vel
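A minimal sketch of the lookup approach, assuming a lookup file called cpu_cores.csv (the name is just a placeholder). A scheduled search refreshes the lookup, e.g. once a day, since the core count rarely changes:

  index=toto sourcetype=tutu type=* | stats max(cpu_core) as nbcore by host | outputlookup cpu_cores.csv

and the main search can then drop the join in favour of:

  ... | stats max(runq) as runq by host _time | lookup cpu_cores.csv host OUTPUT nbcore | eval Vel = (runq / nbcore)

This avoids re-scanning the index for a value that does not change within the search window.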
Hi Splunkers. I have an indexer cluster and all of a sudden all of the indexers go up and down and get stuck in BatchAdding status. I have 4 indexers. These are my settings:

  [clustering]
  cluster_label = IndexerCluster
  mode = master
  rebalance_threshold = 0.95
  replication_factor = 3
  search_factor = 2
  restart_timeout = 180
  service_interval = 90
  heartbeat_timeout = 180
  cxn_timeout = 300
  send_timeout = 300
  rcv_timeout = 300
  max_peer_build_load = 20
  max_peer_rep_load = 50
  max_fixup_time_ms = 0
  maintenance_mode = false

I increased max_peer_build_load to speed up my fixup tasks, but it doesn't help. I've been watching the number of buckets and it increases very slowly. I have this error in the splunkd.log file on the indexers:

  ERROR ProcessTracker - (child_581__Fsck) BucketBuilder - BucketBuilder::error: Event data size is 0. Raw and Meta data may be missing for bucket="/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301"
  WARN ProcessTracker - (child_601__Fsck) Fsck - Repair entire bucket, index=eventlog-online-index, tryWarmThenCold=1, bucket=/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301, exists=1, localrc=3, failReason=(entire bucket) Rebuild for bkt='/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301' failed: BucketBuilder::error: Event data size is 0. Raw and Meta data may be missing for bucket="/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301"

On top of that, crash logs keep appearing on my indexers continuously:
(libpthread.so.0 + 0x2609) [0x00007F43D66FD263] clone + 67 (libc.so.6 + 0xFD263) Linux / indexer1-datacenter / 5.4.0-92-generic / #103-Ubuntu SMP Fri Nov 26 16:13:00 UTC 2021 / x86_64 /etc/debian_version: bullseye/sid Last errno: 2 Threads running: 72 Runtime: 8.643140s argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd] Regex JIT enabled RE2 regex engine enabled using CLOCK_MONOTONIC Thread: "indexerPipe", did_join=0, ready_to_run=Y, main_thread=N First 8 bytes of Thread token @0x7f43c2118e10: 00000000 00 e7 7f c0 43 7f 00 00 |....C...| 00000008 x86 CPUID registers: 0: 00000016 756E6547 6C65746E 49656E69 1: 00050657 08400800 7FFEFBFF BFEBFBFF 2: 76036301 00F0B5FF 00000000 00C30000 3: 00000000 00000000 00000000 00000000 4: 00000000 00000000 00000000 00000000 5: 00000040 00000040 00000003 00002020 6: 00000AF7 00000002 00000009 00000000 7: 00000000 00000000 00000000 00000000 8: 00000000 00000000 00000000 00000000 9: 00000000 00000000 00000000 00000000 A: 07300404 00000000 00000000 00000603 B: 00000000 00000000 0000002F 00000008 C: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 E: 00000000 00000000 00000000 00000000 F: 00000000 00000000 00000000 00000000 10: 00000000 00000000 00000000 00000000 11: 00000000 00000000 00000000 00000000 12: 00000000 00000000 00000000 00000000 13: 00000000 00000000 00000000 00000000 14: 00000000 00000000 00000000 00000000 15: 00000002 000000F0 00000000 00000000 16: 00000BB8 00000FA0 00000064 00000000 80000000: 80000008 00000000 00000000 00000000 80000001: 00000000 00000000 00000121 2C100800 80000002: 65746E49 2952286C 6F655820 2952286E 80000003: 6C6F4720 32362064 20523834 20555043 80000004: 2E332040 48473030 0000007A 00000000 80000005: 00000000 00000000 00000000 00000000 80000006: 00000000 00000000 01006040 00000000 80000007: 00000000 00000000 00000000 00000100 80000008: 0000302E 00000000 00000000 00000000 terminating...   My OS is Ubuntu server 20.04. Any suggestion? Can I bring up one indexer outside of my cluster to prevent log drop and after the cluster will be stable join it to cluster?
Hi guys, I'm working on a search that shows more than 10 accounts disabled within a five minute time frame. I feel like the dumbest girl on earth. I know my search works for the most part, as the events tab shows the exact number of events that occurred within that period of time; however, the statistics tab does not display a table:

  index=wineventlog EventCode=4725 | bin span=5m _time | stats count(user), values(user) by _time EventCode | where count > 10

I also tried

  index=wineventlog EventCode=4725 | bin span=5m _time | table user, Time | search count > 10

Any help would be much appreciated. Thanks
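A minimal sketch of one likely fix: stats count(user) produces a field literally named "count(user)", so where count > 10 has no count field to test. Renaming the aggregates gives the where clause something to match:

  index=wineventlog EventCode=4725 | bin span=5m _time | stats count(user) as count, values(user) as users by _time, EventCode | where count > 10

The same renaming idea applies to values(user), which otherwise shows up as "values(user)" in the statistics table.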
I have just started using Dashboard Studio and was trying to use annotations on a timechart. My timechart is driven by a primary search on index=A. Its X-axis is (of course) _time. As an annotation to it, I want to source values from another search on index=B. However, I don't see any annotations when the dashboard runs. Where am I going wrong? I have pasted the primary search and the annotation search below.

  "ds_6Ze3CeYO": {
      "type": "ds.search",
      "options": {
          "query": "index=sdc_offset source=\"/var/lib/sdc/runInfo/DAY*\"\n| eval pipeline_source = DAY\n| eval lag=_indextime-EpochTime \n| timechart span=5m max(lag) as lag(s)",
          "queryParameters": {
              "earliest": "-24h@h",
              "latest": "now"
          }
      },
      "name": "Base Search - Day latency timechart"
  },
  "ds_eod_search": {
      "type": "ds.search",
      "options": {
          "query": "index=eodbatch \n| bin _time span=1m\n| fields _time JobName \n| eval annotation_label=case(JobName=\"first_event\",\"Batch Started here\",JobName=\"last_event\",\"Batch Ended here\")",
          "queryParameters": {
              "earliest": "-24h@h",
              "latest": "now"
          }
      },
      "name": "Annotation"
  },
Hello, I want to calculate the count of total events and the count of errors, and show the total percentage of failures out of the total. My query is:

  sourcetype=WalletExecuter Exception.Message="* BitGo *" | stats count as total count(eval(Level="Error")) as FAILRUES by Exception.CorrelationId | eval Failure%=round((FAILRUES/total)*100, 2)

but the results returned are the percentage for each CorrelationId. How can I show the total failure percentage? Thanks
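A minimal sketch of one way to get a single overall percentage: drop the by clause so the aggregation runs across all events rather than per CorrelationId.

  sourcetype=WalletExecuter Exception.Message="* BitGo *" | stats count as total, count(eval(Level="Error")) as failures | eval failure_pct=round((failures/total)*100, 2)

If the per-CorrelationId rows are still wanted alongside the overall figure, eventstats could compute total and failures over all events while keeping the grouped rows.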
Hi, I have a requirement where I need to add a tooltip to a text box when the mouse pointer hovers over it in a Studio dashboard. Is that possible? Best
Howdy, I have a search like this: Everything is great! Would it be possible to add a column that contains the timestamp for max(seconds)? I've googled and even tried out some solutions I found here but can't quite get it... (i.e. if I try to add "by host, _time" I get ALL the results). Thanks!
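Without the original search it is hard to be precise, but a minimal sketch of one common pattern, assuming the events carry host and seconds fields: keep only the row holding the per-host maximum and read its timestamp, instead of grouping by _time.

  ... | eventstats max(seconds) as max_seconds by host
  | where seconds=max_seconds
  | stats max(seconds) as max_seconds, latest(_time) as time_of_max by host
  | eval time_of_max=strftime(time_of_max, "%F %T")

Grouping by host and _time splits every event into its own group, which is why that attempt returned all the results.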
I'm trying to identify inactive hosts that crashed (through an alert).
- Inactive host: a host that hasn't logged anything in the past hour.
- Host that didn't crash: logs a message like ".* Gracefully Exited".
- Host that did crash: never logs a message like the one above and eventually becomes inactive.
For inactive hosts, I've found this search to be useful. It searches the past 2 hours for hosts that haven't logged within the last hour:

  | tstats latest(_time) as latest where index=a sourcetype=b source = c earliest=-2h by host | eval logged_within_past_hour = if(latest > relative_time(now(),"-1h"),1,0), time_of_host_last_log = strftime(latest,"%c") | where logged_within_past_hour=0

I'm able to use this Splunk search to find logs where the host terminated:

  index=a sourcetype=b Gracefully Exited

Is there a way to find hosts that crashed and have become inactive? I don't want to include hosts that terminated successfully and didn't crash.
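A minimal sketch of one way to combine the two searches, assuming the "Gracefully Exited" events carry the same host field: a NOT subsearch removes hosts that logged a graceful exit, so only inactive hosts without one (i.e. probable crashes) remain.

  | tstats latest(_time) as latest where index=a sourcetype=b source=c earliest=-2h by host
  | search NOT [ search index=a sourcetype=b "Gracefully Exited" earliest=-2h | stats count by host | fields host ]
  | eval logged_within_past_hour = if(latest > relative_time(now(),"-1h"),1,0), time_of_host_last_log = strftime(latest,"%c")
  | where logged_within_past_hour=0

The subsearch returns a list of host values, which the outer search negates with NOT.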
Hi All, one basic thought (issue) on the Splunk search bar UXD - user experience design:
1. On the search bar, enter a basic search "index=test1", set the time picker to 15 minutes, and choose Fast mode. If you then press the Enter key, the search still won't run; you have to manually hover over the search button and click Run. OK, this is at least fine.
2. After running the first search, leave the search as it is (index=test1) and change the time picker to 60 minutes: the search runs without clicking Run.
3. On this second search, after changing the time picker (the search is now running), if you want to switch between search modes you have to stop/kill the search first and then choose a search mode; once the mode is selected, the search runs without clicking Run.
I hope you understand this user experience issue. Please provide your views, thanks.
We have been using the technique of having a setup.xml file in our app's default directory since Splunk version 6.2.X. With the latest update to Splunk 8.2.2.1 our app does not seem to be able to complete configuration. When the <Save> button is pressed, the "Go to the Setup page" message is redisplayed. Did something change between Splunk 7.X and Splunk 8.X that would cause this to occur? Thanks.
The closest question I came to is this one, but it's not quite there (and it's old). I have a saved search - actually an alert, with actions - that I want to pass dynamic SPL into. You can do this with dashboards and tokens, of course, but I'm specifically looking for an alert that I'm executing over the API. So I may request something like this over the API: https://splunk.mycompany.com:8089/en-US/app/myApp/search?s=%2FservicesNS%2Fnobody%2FmyApp%2Fsaved%2Fsearches%2FmySearch&ExecID=12345 where the saved search has something like "Execution_ID=$ExecID$" in it - just like you would when requesting a dashboard. The value for $ExecID$ is unique, and populating a lookup table for this simple need seems like serious overkill - and it probably doesn't even accomplish what I need. I hope this is written clearly enough. I'm 99% sure it can't be done, but it's been a few years since that last question and, as noted, it's not really a match anyway. Thanks.
Greetings, I am in the preliminary stages of upgrading my Splunk heavy forwarder (HF); however, I wanted to confirm which file to install. I know that the HF requires a Splunk Enterprise license, as opposed to the universal forwarder (UF), which doesn't require one. Therefore, when it comes to installing and upgrading a heavy forwarder, do I install the Splunk forwarder installer, the Splunk Enterprise installer, or both? Thank you in advance for your time. -KB
I have an issue with a URL field being extracted improperly: the extraction fails when an ampersand is present in the URL value. transforms.conf has the following DELIMS:

  DELIMS = "\t", "="

A btool run on the SH member also shows that no other extraction commands or delimiters are in play. All fields are extracted properly except for URL fields that contain an ampersand, where everything beyond the ampersand is excluded from the field value.
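A minimal sketch of one alternative worth testing (hedged, since the full stanza isn't shown): replace the DELIMS-based extraction with an explicit REGEX/FORMAT pair so each value runs up to the next tab regardless of what characters it contains. The stanza name my_kv_extraction is only a placeholder for whatever the existing REPORT- entry references.

  [my_kv_extraction]
  REGEX = ([^\t=]+)=([^\t]*)
  FORMAT = $1::$2
  MV_ADD = true

If the ampersand problem persists with this in place, the truncation would likely be happening before the transform runs.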
I posted this subject a few days ago, and a couple of champs stated that it was not advisable because it would overload ES, and that it was best to create the reports in ES to use the ES use cases. OK, I have a ton of reports that I'd like to use in ES as well. So what other options are there in order not to put a burden on ES? I appreciate your response in advance. Happy 2022.