All Topics


Hi team, I have a user who left the company, and now their dashboard searches are alerting as "orphaned objects". I reassigned all of their objects to me and cloned their dashboards (scream test), but when I go to Settings > User interface > Views to delete the originals, I see no delete option except for the clones I made.

- I changed the permissions on the dashboards to read/write for the sc_admin role only
- I (Admin) now own all the objects
- These dashboards were user-made and not part of a 3rd-party app

What am I missing? I have a few screenshots below to show what I am describing: a screenshot of 'Views' (the clones are private, the originals are not) and a screenshot of the permissions on the original object I want to delete. sc_admin has all the capabilities it can have assigned to it.
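In case it helps with locating the stuck originals, a minimal sketch (assuming an admin-level role with access to the REST endpoint) that lists dashboards with their owning app, owner and sharing level, which often shows why the UI hides the delete link:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| table title eai:acl.app eai:acl.owner eai:acl.sharing

Filtering on eai:acl.sharing is a quick way to spot objects that were promoted to app or global sharing, which is often the reason the delete option does not appear in the current app context.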
Hi, is there a recommendation or guideline available from Splunk on a naming convention for indexes? I have a new Splunk Enterprise environment at my company, with roughly 70 data sources planned to be onboarded one after the other. For example, the Windows TA has ... I would be happy for any input I can get from you. Thank you in advance, Jay
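There is no universal convention here, so purely as an illustrative sketch of one pattern that scales to ~70 sources (prefixing index names by data domain), an indexes.conf fragment might look like this; all names are hypothetical:

# indexes.conf on the indexers, purely illustrative naming
[os_windows]
homePath   = $SPLUNK_DB/os_windows/db
coldPath   = $SPLUNK_DB/os_windows/colddb
thawedPath = $SPLUNK_DB/os_windows/thaweddb

[net_firewall]
homePath   = $SPLUNK_DB/net_firewall/db
coldPath   = $SPLUNK_DB/net_firewall/colddb
thawedPath = $SPLUNK_DB/net_firewall/thaweddb

The idea is only that a domain prefix (os_, net_, app_, sec_ and so on) groups indexes with similar access and retention needs, which tends to keep role-based access simpler as sources are added.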
Hello, has anyone configured the Proofpoint ET or VirusTotal adaptive response action in ES? Basically, I want to look up the destination IP from events against these services. Can someone please advise how to configure this? For Proofpoint Check ET, it asks for "Object". What is "Object" here?
Is it possible to append more than 10k records between 2 indexes? How can I overcome this through SPL, without modifying a conf file or adding any parameter?
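A minimal sketch of one way around the append/subsearch result limit, assuming the goal is simply to report across events from two indexes: put both indexes in the base search so no subsearch (and therefore no 10k cap) is involved. The index names are placeholders:

(index=index_a) OR (index=index_b)
| stats count by index, sourcetype

If the two result sets genuinely have to come from different searches, multisearch is another option to look at (it only accepts streaming subsearches), since it also avoids the append subsearch limits.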
I am trying to set a token ($TimeFrame$) to contain the same text as displayed by the Time Frame filter after selecting any particular time picker range – in this case "Last 13 Days" selected from the Relative section of the time picker – but any time picker range or preset text displayed in the Time Frame filter must work - see diagram below.

I would like to extract exactly the same text that Splunk Enterprise puts in the filter display box and assign it to my token $TimeFrame$. I can only find solutions that work in a limited number of cases, because they try to convert the formatted earliest and latest tokens back into text. For example, the code below works some of the time, but not for "Last 13 Days", and it is very messy having to deal with special cases individually, for example "All Time":

<eval token="picktime">"From ".strftime($field1.earliest$,"%H:%M %e-%b-%Y")." to ".strftime($field1.latest$,"%H:%M %e-%b-%Y")</eval>
<eval token="TimeFrame">if($picktime$ == "From 01:00 1-Jan-1970 to 01:00 1-Jan-1970" OR $picktime$ == "From 00:00 1-Jan-1970 to 00:00 1-Jan-1970","All time",$picktime$)</eval>

Does anyone know of a better way of doing this? Mike
I have two columns: one is the datacenter location and the second is the number of servers. I want to show this on a map. How can I show it without latitude and longitude details? Do I need to upload a CSV with latitude and longitude for all these locations and then use the geostats command? The datacenter locations are New York, Texas, Amsterdam and Mumbai.
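A minimal sketch of the CSV-lookup approach, assuming the existing results have fields datacenter and servers, and assuming a hypothetical lookup file datacenter_locations.csv with columns datacenter, lat, lon:

... | lookup datacenter_locations.csv datacenter OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(servers)

With only four named locations the CSV stays tiny; some form of coordinate lookup is needed either way, since geostats itself expects latitude/longitude fields.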
So it's great that we can now import icons and images into Dashboard Studio pages. It looks like they get stored in a KV store somewhere, but how can we manage them? For instance, if I want a copy of the dashboard on another Splunk instance, how can I make sure that the icons and images are on the second instance? I don't see any way to manage these without having to manually update the dashboard on the second instance. Help appreciated.
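As a starting point for tracking the images down, a hedged sketch (assuming REST access on both instances) that lists KV store collections so the one holding dashboard images/icons can be identified; the actual collection name is not assumed here:

| rest /servicesNS/-/-/storage/collections/config splunk_server=local
| search title=*image* OR title=*icon*
| table title eai:acl.app

Once the collection is known, its contents can be read from and re-posted to the storage/collections/data endpoint on the second instance, which is one way to move the images without rebuilding the dashboard by hand.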
Hello, I've been trying to get data into SSE, but somehow I can't. The setup is the following: Splunk Enterprise, Universal Forwarder, the Forcepoint app, syslog-ng (for receiving the logs, which I monitor with the UF) and Splunk Security Essentials. I've tried different things with the demo data, but whenever I try to do anything with the live data I hit a wall.

I've tried to follow these instructions, https://docs.splunk.com/Documentation/SSE/3.4.0/Install/ConfigureSSE, but they seem unclear and somewhat inaccurate. For example, in the chapter on getting data in ("Configure the products you have in your environment with the Data Inventory dashboard"), when I browse the web interface there is no option for "2.b. Click Manually Configure to manually enter your data."

The first thing I noticed was an error thrown for the ES integration, for which I didn't find any information. When I open any use case, for example "Basic Scanning", the sourcetype and index for Forcepoint (index="forcepoint", sourcetype="next-generation-firewall") are missing by default. Is there any way to add them automatically for all the use cases?

I already have logs forwarded from Forcepoint and monitored by the indexer, and they are displayed in Splunk Search & Reporting and the Forcepoint app. Even if I change the index and sourcetype in the "enter a search" field, I still get these results. Can you give me any info on the tags, like what they are and what they are used for?

Any guides or tips will be highly appreciated, thanks!
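On the tag question: tags in Splunk are labels applied to events (usually via eventtypes) so that content such as SSE and ES can find data generically, for example tag=network or tag=authentication, instead of hard-coding every index and sourcetype. A small sketch to check what is actually being applied to the live Forcepoint data (index and sourcetype taken from the post):

index=forcepoint sourcetype="next-generation-firewall"
| stats count by eventtype, tag

If eventtype and tag come back empty, the add-on's knowledge objects are likely not applied where the search runs, which would also help explain why SSE content does not pick the data up automatically.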
I want to get a report for the last 5 days with the availability of devices/tools. Therefore, I search for all defective devices on a given day with the stats dc command. Afterwards, I want to calculate the available devices with the equation Avail = All - Defect and display this for every day in the last week. But after the stats dc command only 2 fields are visible: _time, dc_Devices. For my calculation I need the All field too.

My search:
... | eval All = 100
| bin span=1d _time
| stats dc(Devices) as dc_Devices by _time

How do I add the calculation for the available value? Thanks in advance.
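A minimal sketch of one fix, keeping the constant 100 from the post: stats only keeps the fields it is told about, so recreate All after the stats and then compute the difference:

... | bin span=1d _time
| stats dc(Devices) as dc_Devices by _time
| eval All = 100
| eval Avail = All - dc_Devices

If All is not really a constant (for example it should be the total number of known devices), it would instead need to be carried through the stats with its own aggregation, but that depends on what the events contain.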
Hi Team, I was comparing the summary index transaction time with the live Splunk server transaction time. I see that all transactions collected in a 15-minute bucket keep the same time, overriding the actual transaction time. Is there a way to retain the original time while still keeping the count in the defined buckets? Nishant
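A minimal sketch of one workaround, assuming the summary-populating search uses bin/stats into 15-minute buckets: copy the event time into its own field before binning so the original timestamp is preserved in the summary next to the bucketed _time (orig_time and the output field names are hypothetical):

... | eval orig_time=_time
| bin _time span=15m
| stats count min(orig_time) as first_txn_time max(orig_time) as last_txn_time by _time

The counts still roll up per 15-minute bucket, while first_txn_time/last_txn_time keep a handle on when the underlying transactions actually happened.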
Hello Team, how can I combine the two searches given below and get the AWS instance name?

aws-description-resource( (aws_account_id="*") , (region="*") , "ec2_instances") | search private_ip_address="172.20.187.54"

index=c3d_security host=ip-172-23* rule=corp_deny_all_to_untrust NOT dest_port=4431 | table src_ip dest_ip transport dest_port application

Note: I am getting the output as src_ip, dest_ip, transport, dest_port and application, so how can I combine these two searches and add the AWS instance name to the table?

Regards, Neelesh Tiwari
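A minimal sketch of one way to combine them with join, assuming the aws-description-resource search returns private_ip_address plus an instance-name field; the field name tags.Name used here is an assumption, so check what that search actually outputs:

index=c3d_security host=ip-172-23* rule=corp_deny_all_to_untrust NOT dest_port=4431
| join type=left src_ip
    [ <the aws-description-resource search from above>
      | rename private_ip_address as src_ip, "tags.Name" as instance_name
      | fields src_ip instance_name ]
| table src_ip dest_ip transport dest_port application instance_name

A lookup populated from the AWS description data would scale better than join if there are many instances, but the join version is the most direct translation of the two searches above.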
Hello, I need assistance with a time format conversion.

Input: %F (2021-11-23)
Desired outputs: 23 Nov, 11/23/21, 11/23/2021
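Assuming the goal is to turn the %F-formatted value into those three output formats, a minimal sketch with strptime/strftime (date_str and the output field names are hypothetical):

| makeresults
| eval date_str="2021-11-23"
| eval t=strptime(date_str, "%F")
| eval day_month=strftime(t, "%d %b"), short_date=strftime(t, "%m/%d/%y"), long_date=strftime(t, "%m/%d/%Y")

The same strftime calls work directly on _time if the value is already a timestamp rather than a string.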
Hi all. I'm fairly new to Splunk and regex. I've got many event logs and I'm making use of data models before generating different visualisations.

The fields discovered aren't good enough for my use case, so I need to extract specific fields. Right now, using the following regex

(?<field_name>(([a-zA-Z]+(\.[a-zA-Z]+)+)_([a-zA-Z]+(|[a-zA-Z]+)+)|/^([^.]+)/))

I'm able to extract this pattern

ABC|DEF|GHI

most accurately. Next, I would like to extract each respective word into its own field: in total, 3 different fields for ABC, DEF and GHI respectively. Is there a way to extract each individual word? How can I perform a regex expression on top of my current regex expression result? Thank you.
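A minimal sketch of chaining a second extraction onto the first result, assuming the first rex already produced field_name with values like ABC|DEF|GHI (the new field names are hypothetical):

... | rex field=field_name "^(?<part1>[^|]+)\|(?<part2>[^|]+)\|(?<part3>[^|]+)$"

An alternative under the same assumption is | eval parts=split(field_name, "|") followed by mvindex(parts, 0), mvindex(parts, 1) and mvindex(parts, 2), which avoids writing a second regex.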
I have made my search query run over all time because I have created dropdowns for month, date and year. But I want the search result to always display the latest result by default. How can I do that? I pass the date, month and year to the search query, but for the default, I want the dashboard to always display the latest result.
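A hedged sketch of one way to do this in Simple XML, assuming the dropdowns are populated by a search: sort that search so the newest value comes first and let the input pre-select it (the token, index and field names are hypothetical, and this relies on the selectFirstChoice input option):

<input type="dropdown" token="report_date">
  <label>Date</label>
  <search>
    <query>index=my_index | stats count by date | sort - date</query>
  </search>
  <fieldForLabel>date</fieldForLabel>
  <fieldForValue>date</fieldForValue>
  <selectFirstChoice>true</selectFirstChoice>
</input>

With that in place the dashboard loads showing the latest value by default, while still letting users pick an older one.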
Hi, I need to improve the subsearch below. Let me explain: the piece of code in the subsearch counts the number of cores of the machine, and this count is always the same no matter the time. So I wonder whether it would be better to put these results in a CSV lookup and query the lookup instead of querying the index? Or are there other ways to improve this search? Thanks.

index=toto sourcetype=tutu type=* runq
| fields host _time runq type
| stats max(runq) as runq by host _time
| join host
    [ search index=toto sourcetype=tutu type=*
      | fields host cpu_core
      | search host=1328
      | stats max(cpu_core) as nbcore by host ]
| eval Vel = (runq / nbcore)
| eval _time = strftime(_time, "%d-%m-%y %H:%M:%S")
| sort - _time
| rename host as Host, _time as Heure
| table Heure Host Vel
| sort - Vel
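A minimal sketch of the lookup approach, assuming a hypothetical lookup file host_cores.csv. A scheduled search writes the per-host core count, for example once a day:

index=toto sourcetype=tutu type=*
| stats max(cpu_core) as nbcore by host
| outputlookup host_cores.csv

and the main search then replaces the join with a lookup:

index=toto sourcetype=tutu type=* runq
| stats max(runq) as runq by host _time
| lookup host_cores.csv host OUTPUT nbcore
| eval Vel = runq / nbcore

Since the core count really is static, this removes the second pass over the index on every run, which is usually where the time goes.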
Hi Splunkers. I have an indexer cluster, and all of a sudden all the peers go up and down and get stuck in BatchAdding status. I have 4 indexers. These are my settings:

[clustering]
cluster_label = IndexerCluster
mode = master
rebalance_threshold = 0.95
replication_factor = 3
search_factor = 2
restart_timeout = 180
service_interval = 90
heartbeat_timeout = 180
cxn_timeout = 300
send_timeout = 300
rcv_timeout = 300
max_peer_build_load = 20
max_peer_rep_load = 50
max_fixup_time_ms = 0
maintenance_mode = false

I increased max_peer_build_load to speed up my fixup tasks, but it doesn't help. I've been watching the number of buckets and it increases very slowly. I have this error in splunkd.log on the indexers:

ERROR ProcessTracker - (child_581__Fsck) BucketBuilder - BucketBuilder::error: Event data size is 0. Raw and Meta data may be missing for bucket="/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301"

WARN ProcessTracker - (child_601__Fsck) Fsck - Repair entire bucket, index=eventlog-online-index, tryWarmThenCold=1, bucket=/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301, exists=1, localrc=3, failReason=(entire bucket) Rebuild for bkt='/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301' failed: BucketBuilder::error: Event data size is 0. Raw and Meta data may be missing for bucket="/Splunk-Storage/HOT/eventlog-online-index/db_1641702441_1641656220_301"

On top of that, crash.log files keep being generated on my indexers:

Received fatal signal 8 (Floating point exception).
Cause: Integer division by zero at address [0x0000557E03DBB1D9].
Crashing thread: indexerPipe
Registers:
RIP: [0x0000557E03DBB1D9] _ZN12HotDBManager19computeBucketMapKeyERK15CowPipelineData + 121 (splunkd + 0xEF91D9)
RDI: [0x00007F43D73836D0] RSI: [0x00007F43ABDAA72D] RBP: [0x00007F43C022EB40] RSP: [0x00007F43C07FD5A0]
RAX: [0x07AC58C70206CAB3] RBX: [0x07AC58C70206CAB3] RCX: [0x0000000000000000] RDX: [0x0000000000000000]
R8: [0x00000000000000B8] R9: [0x00007F43C8F3E060] R10: [0x00007F43D73867D0] R11: [0x00007F43D6200080]
R12: [0x00007F43D7385E08] R13: [0x00007F43C07FD5F0] R14: [0x00007F43C02148E0] R15: [0x00007F43B6C2B500]
EFL: [0x0000000000010246] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x002B000000000033] OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x0000557E03DBB1D9] _ZN12HotDBManager19computeBucketMapKeyERK15CowPipelineData + 121 (splunkd + 0xEF91D9)
[0x0000557E03DBCFDA] _ZN12HotDBManager15_suitableBucketERK15CowPipelineDatalRblR3Str + 410 (splunkd + 0xEFAFDA)
[0x0000557E03DBF018] _ZN12HotDBManager10suitableDbERK15CowPipelineDatalRblR3Str + 24 (splunkd + 0xEFD018)
[0x0000557E03E1AF53] _ZN11IndexWriter11_dbLazyLoadERK15CowPipelineDatall + 131 (splunkd + 0xF58F53)
[0x0000557E03E1C054] _ZN11IndexWriter14write_internalER15CowPipelineDatalRP8DBBucketb + 308 (splunkd + 0xF5A054)
[0x0000557E03E1C8D7] _ZN11IndexWriter10write_implER15CowPipelineDatalb + 103 (splunkd + 0xF5A8D7)
[0x0000557E03E1CC43] _ZN11IndexWriter5writeER15CowPipelineDatal + 19 (splunkd + 0xF5AC43)
[0x0000557E03E1404F] _ZN14IndexProcessor7executeER15CowPipelineData + 3951 (splunkd + 0xF5204F)
[0x0000557E0433F585] _ZN9Processor20executeMultiLastStepER18PipelineDataVector + 101 (splunkd + 0x147D585)
[0x0000557E03B2ABCA] _ZN8Pipeline4mainEv + 1418 (splunkd + 0xC68BCA)
[0x0000557E048FD9D8] _ZN6Thread8callMainEPv + 120 (splunkd + 0x1A3B9D8)
[0x00007F43D67D6609] ? (libpthread.so.0 + 0x2609)
[0x00007F43D66FD263] clone + 67 (libc.so.6 + 0xFD263)
Linux / indexer1-datacenter / 5.4.0-92-generic / #103-Ubuntu SMP Fri Nov 26 16:13:00 UTC 2021 / x86_64
/etc/debian_version: bullseye/sid
Last errno: 2
Threads running: 72
Runtime: 8.643140s
argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd]
Regex JIT enabled
RE2 regex engine enabled
using CLOCK_MONOTONIC
Thread: "indexerPipe", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7f43c2118e10:
00000000 00 e7 7f c0 43 7f 00 00 |....C...|
00000008
x86 CPUID registers:
0: 00000016 756E6547 6C65746E 49656E69
1: 00050657 08400800 7FFEFBFF BFEBFBFF
2: 76036301 00F0B5FF 00000000 00C30000
3: 00000000 00000000 00000000 00000000
4: 00000000 00000000 00000000 00000000
5: 00000040 00000040 00000003 00002020
6: 00000AF7 00000002 00000009 00000000
7: 00000000 00000000 00000000 00000000
8: 00000000 00000000 00000000 00000000
9: 00000000 00000000 00000000 00000000
A: 07300404 00000000 00000000 00000603
B: 00000000 00000000 0000002F 00000008
C: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
E: 00000000 00000000 00000000 00000000
F: 00000000 00000000 00000000 00000000
10: 00000000 00000000 00000000 00000000
11: 00000000 00000000 00000000 00000000
12: 00000000 00000000 00000000 00000000
13: 00000000 00000000 00000000 00000000
14: 00000000 00000000 00000000 00000000
15: 00000002 000000F0 00000000 00000000
16: 00000BB8 00000FA0 00000064 00000000
80000000: 80000008 00000000 00000000 00000000
80000001: 00000000 00000000 00000121 2C100800
80000002: 65746E49 2952286C 6F655820 2952286E
80000003: 6C6F4720 32362064 20523834 20555043
80000004: 2E332040 48473030 0000007A 00000000
80000005: 00000000 00000000 00000000 00000000
80000006: 00000000 00000000 01006040 00000000
80000007: 00000000 00000000 00000000 00000100
80000008: 0000302E 00000000 00000000 00000000
terminating...

My OS is Ubuntu Server 20.04. Any suggestions? Can I bring one indexer up outside of my cluster to prevent log loss, and then join it back to the cluster once the cluster is stable?
Hi guys, I'm working on a search that shows more than 10 accounts disabled within a five-minute time frame. I feel like the dumbest girl on earth. I know my search works for the most part, as the events tab shows the exact number of events that occurred within that period of time; however, the statistics tab does not display a table:

index=wineventlog EventCode=4725
| bin span=5m _time
| stats count(user), values(user) by _time EventCode
| where count > 10

I also tried

index=wineventlog EventCode=4725
| bin span=5m _time
| table user, Time
| search count > 10

Any help would be much appreciated. Thanks
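A minimal sketch of one likely fix, assuming the issue is that stats names the aggregation count(user) rather than count, so where count > 10 never matches anything: rename the aggregations explicitly and filter on the renamed field (disabled_count and users are hypothetical names):

index=wineventlog EventCode=4725
| bin span=5m _time
| stats count(user) as disabled_count values(user) as users by _time EventCode
| where disabled_count > 10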
I have just started using Dashboard Studio and was trying to use an annotation on a timechart. My timechart is driven by a primary search on index=A, and its X-axis is (of course) _time. As the annotation, I want to source values from another search on index=B. However, I don't seem to see any annotations when the dashboard runs. Where am I going wrong? I have pasted the primary search and annotation search below.

"ds_6Ze3CeYO": {
    "type": "ds.search",
    "options": {
        "query": "index=sdc_offset source=\"/var/lib/sdc/runInfo/DAY*\"\n| eval pipeline_source = DAY\n| eval lag=_indextime-EpochTime \n| timechart span=5m max(lag) as lag(s)",
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Base Search - Day latency timechart"
},
"ds_eod_search": {
    "type": "ds.search",
    "options": {
        "query": "index=eodbatch \n| bin _time span=1m\n| fields _time JobName \n| eval annotation_label=case(JobName=\"first_event\",\"Batch Started here\",JobName=\"last_event\",\"Batch Ended here\")",
        "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
        }
    },
    "name": "Annotation"
},
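In case the missing piece is only the wiring, a hedged sketch of how an annotation data source is typically attached to the chart itself via the visualization's dataSources. The visualization id viz_line_1 is hypothetical, the data source ids are the ones from the post, and this assumes the line chart accepts an "annotation" entry as shown in the Dashboard Studio annotation examples:

"viz_line_1": {
    "type": "splunk.line",
    "dataSources": {
        "primary": "ds_6Ze3CeYO",
        "annotation": "ds_eod_search"
    }
}

The annotation search also needs to return _time alongside annotation_label, so it is worth confirming that the bin/fields steps in ds_eod_search still leave _time in the results.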
Hello, I want to calculate the count of total events and the count of errors, and show the percentage of failures out of the total. My query is:

sourcetype=WalletExecuter Exception.Message="* BitGo *"
| stats count as total count(eval(Level="Error")) as FAILURES by Exception.CorrelationId
| eval Failure%=round((FAILURES/total)*100, 2)

But the results returned are the percentage for each CorrelationId. How can I show the total failure percentage? Thanks
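A minimal sketch of one way to get the overall figure, assuming a single percentage across all CorrelationIds is what is wanted: drop the by clause so the stats runs over everything (failures and failure_pct are hypothetical names):

sourcetype=WalletExecuter Exception.Message="* BitGo *"
| stats count as total count(eval(Level="Error")) as failures
| eval failure_pct=round(failures/total*100, 2)

If the per-CorrelationId rows are still needed alongside the overall number, eventstats can add the totals across all rows without collapsing them.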
Hi, I have a requirement where I need to add a tooltip to a text box in a Studio dashboard, shown when the mouse pointer hovers over it. Is this possible? Best