All Topics


Hi all, I have a very simple use case: display the time difference between two fields whose values are already timestamps in epoch format. But when I use ctime to display the difference, it shows weird results. As shown below, my events contain two fields (tt0 and tt1). Their values are epoch timestamps; if you manually convert them to human-readable time, the difference between tt0 and tt1 is just 3 minutes and xx seconds.

tt0: 1675061542
tt1: 1675061732

But when I do a

| eval ttc=tt1-tt0
| convert ctime(ttc)

Splunk displays ttc as follows: 12/31/1969 18:56:49.2304990

What am I doing wrong here? How do I make it display ttc correctly?
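For context on the symptom above: the difference is a duration in seconds, not a point in time, so ctime renders it as a date just after 1 Jan 1970. A minimal sketch of the usual alternative, assuming tt0 and tt1 are plain epoch seconds:

    | eval ttc=tt1-tt0
    | eval ttc_readable=tostring(ttc, "duration")

tostring(..., "duration") formats 190 as 00:03:10 instead of treating it as an epoch timestamp.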
We are planning to upgrade our Splunk_TA_windows app (8.5.0 at the moment) to the latest version, and during a deep dive into props and transforms I noticed all these transforms being called from Perfmon sourcetypes. Example:

[Perfmon:Processor]
EVAL-cpu_user_percent = if(counter=="% User Time",Value,null())
EVAL-cpu_load_percent = if(counter=="% Processor Time",Value,null())
FIELDALIAS-cpu_instance = instance AS cpu_instance
EVAL-cpu_interrupts = if(counter=="Interrupts/sec" AND instance=="_Total",Value,null())
## Creation of redundant EVAL to avoid tag expansion issue ADDON-10972
EVAL-windows_cpu_load_percent = if(counter=="% Processor Time",Value,null())
FIELDALIAS-dest_for_perfmon = host AS dest
FIELDALIAS-src_for_perfmon = host AS src
TRANSFORMS-_value_for_perfmon_metrics_store = value_for_perfmon_metrics_store
TRANSFORMS-metric_name_for_perfmon_metrics_store = metric_name_for_perfmon_metrics_store
TRANSFORMS-object_for_perfmon_metrics_store = object_for_perfmon_metrics_store
TRANSFORMS-instance_for_perfmon_metrics_store = instance_for_perfmon_metrics_store
TRANSFORMS-collection_for_perfmon_metrics_store = collection_for_perfmon_metrics_store
EVAL-metric_type = "gauge"

These transforms seem to extract data and store it in meta fields, like this one:

[value_for_perfmon_metrics_store]
REGEX = Value=\"?([^\"\r\n]*[^\"\s])
FORMAT = _value::$1
WRITE_META = true

We have until now indexed Perfmon data to event indexes. Will these transforms lead to unnecessary data storage on the indexer cluster? Should we comment out the transforms until we're ready to move Perfmon data over to metrics indexes?
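If you do decide to disable them, one common pattern is a local override rather than editing the add-on's default files. A minimal sketch, assuming the stanza name shown above (setting a TRANSFORMS- key to an empty value in a local props.conf overrides the default; verify the result with btool):

    # $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/props.conf
    [Perfmon:Processor]
    TRANSFORMS-_value_for_perfmon_metrics_store =
    TRANSFORMS-metric_name_for_perfmon_metrics_store =

Whether the WRITE_META fields add meaningful storage depends on your data volume; this is just the mechanism, not a recommendation.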
Why does walklex return spaces before some of the field names, but fieldsummary does not? When I see this without field extractions causing spaces in the field names, it usually seems to be "special" fields that this happens to. But these fields don't seem to exist if I try to search for or with them. Is this simply an output parsing bug in walklex, or an indexing bug adding a space? If so:
1. Should the space be trimmed, or the event removed, to get the correct results?
2. Any context on why this is happening with specific fields?

fieldsummary command, with no spaces in field names:

index=indexName
| fieldsummary
| stats count by field

Example results from fieldsummary:
field
host
source
sourcetype
timestamp

walklex command, with spaces in field names:

| walklex index=indexName type=field
| stats count by field

Example results from walklex:
field
 host
 timestamp
host
timestamp
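For what it's worth, a quick way to collapse the space-prefixed duplicates while investigating, as a sketch and assuming the leading whitespace is literal in the field value:

    | walklex index=indexName type=field
    | eval field=trim(field)
    | stats count by field

trim() strips leading and trailing whitespace, so " host" and "host" aggregate into one row.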
I was trying to send data through the Splunk HEC (HTTP Event Collector):

curl http://ip:8088/services/collector -H "Authorization: Splunk <HEC_TOKEN>" -d '{"event": "Test1"}{"event": "Test2"}{"event": "Test3"}'

In Splunk it comes in as:
Test1 (-> Event 1)
Test2 (-> Event 2)
Test3 (-> Event 3)

Result I want:
Test1Test2Test3 (as one event in Splunk)
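HEC treats each top-level {"event": ...} object in the payload as its own event, so batching three objects yields three events. A minimal sketch of the single-event alternative, using the same endpoint and token placeholders as above:

    curl http://ip:8088/services/collector -H "Authorization: Splunk <HEC_TOKEN>" -d '{"event": "Test1Test2Test3"}'

If the three strings are produced separately, concatenate them client-side before building the payload.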
Hi, I'm implementing some searches provided by the Splunk Threat Research Team to detect threats in AD logs, but I cannot set all the required fields. For example, one of them is "Windows Computer Account Requesting Kerberos Ticket" (https://research.splunk.com/endpoint/fb3b2bb3-75a4-4279-848a-165b42624770/). It requires some fields that I cannot find, such as subject and action. Below is a sample log; I can't tell which value I should extract as "subject" and which as "action". I use "WinEventLog:Security" as the sourcetype and have installed TA-microsoft-windows. Thank you.

LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4768
EventType=0
Type=Information
ComputerName=win-dc-128.attackrange.local
TaskCategory=Kerberos Authentication Service
OpCode=Info
RecordNumber=2106676187
Keywords=Audit Success
Message=A Kerberos authentication ticket (TGT) was requested.
Account Information:
    Account Name: PC-DEMO$
    Supplied Realm Name: attackrange.local
    User ID: ATTACKRANGE\PC-DEMO$
Service Information:
    Service Name: krbtgt
    Service ID: ATTACKRANGE\krbtgt
Network Information:
    Client Address: ::ffff:10.0.1.15
    Client Port: 59022
Additional Information:
    Ticket Options: 0x40800010
    Result Code: 0x0
    Ticket Encryption Type: 0x12
    Pre-Authentication Type: 2
Certificate Information:
    Certificate Issuer Name:
    Certificate Serial Number:
    Certificate Thumbprint:
Please help. I used _time from the log date, and the current time from the Windows clock, but when I try subtracting one from the other, no result appears in the durationday column.

stats max(_time) as lastlogin by user
| eval n=time()
| eval today=strftime(n,"%m-%d-%Y %H:%M:%S.%Q")
| eval durationday = lastlogin - today
| table user, lastlogin, today, durationday

And the result is:

user         lastlogin                  today                      durationday
dsadadnk12   01-30-2023 11:10:27.208    01-30-2023 11:25:14.000    (blank)
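One likely cause: strftime() returns a string, so lastlogin - today mixes a number (or another formatted string) with a string and evaluates to null. A sketch that keeps the arithmetic in epoch seconds and formats only for display (field names taken from the post; the assumption here is that durationday is meant to hold days since last login):

    ... | stats max(_time) as lastlogin by user
    | eval n=now()
    | eval durationday = round((n - lastlogin) / 86400, 1)
    | eval lastlogin_display=strftime(lastlogin,"%m-%d-%Y %H:%M:%S.%Q"), today_display=strftime(n,"%m-%d-%Y %H:%M:%S.%Q")
    | table user, lastlogin_display, today_display, durationday

86400 is the number of seconds per day.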
Hi Splunk gods,

I have an enquiry. I have an environment in which heavy forwarder logs are sent to an indexer cluster. I need the multiple indexes below merged into a single index, index_general, so that when users search index_general they can find all the logs contained in the three indexes.

1) Is this configuration feasible?
index_fw -> index_general
index_window -> index_general
index_linux -> index_general
2) If yes, does this configuration need to be done on the HF or the indexer?
3) If the answer to question 2 is yes, which config file should be configured?
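If the goal is only that a single search term covers all three, a search-time alternative avoids re-routing any data. A sketch using an event type (index names taken from the post; eventtypes.conf lives on the search head):

    # eventtypes.conf
    [index_general]
    search = index=index_fw OR index=index_window OR index=index_linux

Users then search eventtype=index_general. Physically redirecting events into one index would instead be done with props/transforms on the first full Splunk instance that parses the data (the HF here), but that changes where the data is stored rather than just how it is searched.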
Hi, I am tracking service requests and responses and trying to create a table that contains both, but the requests and responses are ingested into Splunk as separate lines. I have a common field (trace) which is present in both lines and unique per request/response pair. Example:

line1: trace: 12345 , Request Received: {1}, URL:http://
line2: trace: 12346 , Request Received: {2}, URL:http://
line3: trace:12345 , Reponse provided: {3}
line4: trace:12346 , Reponse provided :{4}

The trace field is common to line1 and line3, and likewise to line2 and line4. I want the end result as a table:

trace    request    response
12345    {1}        {3}
12346    {2}        {4}
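The usual pattern for pairing events that share a key is stats by that key. A sketch, assuming request and response still need extracting (the rex patterns below are illustrative guesses based on the sample lines, not from the post):

    ... | rex "Request Received:\s*(?<request>\{\d+\})"
    | rex "Reponse provided\s*:?\s*(?<response>\{\d+\})"
    | stats values(request) as request, values(response) as response by trace

stats collapses the two events per trace into one row carrying both fields.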
Hi all, when I try to update any installed apps from the GUI I receive a 500 internal error. Checking the _internal logs I see this:

File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 655, in simpleRequest
raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/services/apps/remote/entriesbyid/SplunkAdmins

I am on 9.0.3. I don't have a proxy set up, and all my file permissions are fine. Hope someone can help with this one. Thanks.
Hi experts,

While adding the query below to my dashboard I am getting an error:

| eval Category=case(Ratings>"8","Promoter",Ratings > "7","Detractor",Ratings > "6" AND Ratings < "9", "Passive")

Error: Unencoded <

Regards,
Mayank
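That error comes from the dashboard XML parser: a search inside a Simple XML <query> element must escape a literal < as &lt; (or be wrapped in a CDATA section). A sketch of the escaped form; note too that quoted numbers compare as strings, so the tonumber() calls below are an assumption about the intent, making the comparisons numeric:

    <query>
    | eval Category=case(tonumber(Ratings)&gt;8,"Promoter", tonumber(Ratings)&gt;7,"Detractor", tonumber(Ratings)&gt;6 AND tonumber(Ratings)&lt;9,"Passive")
    </query>

Only &lt; is strictly required by the parser; escaping &gt; as well is conventional.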
I have the table below and want to reflect the severity of the duration metric by highlighting the entire row:

index=...
| eval epochtime = strptime(startTime,"%a %m/%d %H:%M %Y")
| eval start = strftime(epochtime,"%a %d/%m/%Y %H:%M")
| eval duration = tostring(round(now()-epochtime), "duration")
| table time user client start duration

Is it possible to highlight or outline the entire row in red if duration > 08:00?
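Out of the box, Simple XML table formatting colors individual cells rather than whole rows; full-row highlighting generally needs a small JS extension (the Splunk Dashboard Examples app has a table row highlighting example). A sketch of the built-in cell-level version, assuming a numeric duration_secs column is added alongside the display string (28800 seconds = 8 hours):

    | eval duration_secs = round(now() - epochtime)
    | table time user client start duration duration_secs

    <format type="color" field="duration_secs">
      <colorPalette type="expression">if(value > 28800, "#DC4E41", "#FFFFFF")</colorPalette>
    </format>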
Hi, new SPLUNKER here. I'm trying to download the free Enterprise software for study purposes. I've tried downloading in regular and incognito mode, but the "ACCESS PROGRAM" button stays greyed out. I've "read" the EULA and there is no checkbox for it. Am I missing something? Thx,
We are using the tabs extension: tabs.js, tabs.css (from https://github.com/LukeMurphey/splunk-dashboard-tabs-example). Today we upgraded our Splunk from 8 to 9, and since then the tabs are not working: they are visible, but nothing happens when we click a tab name. I have tried:
1. updating the code to the one in the git link above
2. the solution in https://community.splunk.com/t5/Dashboards-Visualizations/Tabs-in-Splunk-dashboard-not-working-after-Splunk-7-3-upgrade/m-p/482436#M31634
Any help would be appreciated.
Dear all, I want to monitor a Linux device via syslog, but when I go to add data inputs I cannot find the TCP & UDP options. How do I add this input option? [screenshot attached]
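If the option is missing from the UI, the equivalent listener can be declared directly in inputs.conf. A minimal sketch for a syslog listener; the port and sourcetype are assumptions, so adjust them to your setup:

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [udp://514]
    sourcetype = syslog
    connection_host = ip

    [tcp://514]
    sourcetype = syslog
    connection_host = ip

connection_host = ip sets the host field from the sender's IP address.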
When I use walklex on my indexes, it doesn't appear to follow the time specifications very well. Does anybody know what is, or might be, happening here?

Command:

| walklex index=indexName type=field
| stats count by field

Examples for an index:

Index 1:
* The buckets generally take about 6 hours to roll from hot to warm.
* When I select last 24 hours, I get results from the above query like I would expect, with a bit of overflow due to the bucket time span, but then there is a couple-week gap followed by some events returned from several weeks prior.

Index 2:
* Some buckets span upwards of 2 years.
* When I run walklex over the last 7 days, I get results all the way back to 2017.

When I look for the bucket ID and guId of the bucket containing the old results using dbinspect over a 14-day time range, I do not see that local ID combined with that guId. But when I look at all time, I find the guId and local ID pair. The bucket shows as being hot and last edited in January of 2020... which, all of the other weird behavior aside, means walklex shouldn't be getting data from hot buckets unless the docs are wrong?
Hello all, recently I had to move our index DB to a new location to free up some storage space. I followed the documentation outlined in https://docs.splunk.com/Documentation/Splunk/9.0.3/Indexer/Moveanindex and everything is working fine with the exception of the built-in Monitoring Console app. When loading the resource usage web page for the instance, it just appears empty. I tried to narrow down the searches, and it seems that none of the dmc macros (dmc_*) are working; if I run the contents of a macro instead of calling the macro, it works as expected. Does anyone know why this is happening and the best way to go about fixing it? [screenshot: After DB move]
All, I am working on an app with some custom commands, and it requires me to restart quite a bit. Is there a way to speed up the restarts? Right now it's about 45 seconds (8 cores, 64 GB, M.2) with the latest Splunk container. Currently I shell into the Enterprise container and just sudo-restart the splunk binary. Maybe...

- Remove some apps? If so, which of the ones packaged with Splunk are safe to remove without causing issues?
- Can I get rid of the Splunk Web login and log in directly as admin?

Any ideas? Thanks!
-Daniel
Hi,

My sources:
1. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC.log
2. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-show.log
3. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-ignored-sms.log
4. /app/splunkser/ShiftMinJMC/ShiftMinJMC.log
5. /app/splunkser/ShiftMinJMC/ShiftMinJMC-show.log
6. /app/splunkser/ShiftMinJMC/ShiftMinJMC-ignored-sms.log
7. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC.log
8. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-show.log
9. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-ignored-sms.log

I am receiving data from the above sources in the SIT environment, but in production I am not receiving logs from these sources:
1. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC.log
4. /app/splunkser/ShiftMinJMC/ShiftMinJMC.log
7. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC.log

Note: I am getting logs in SIT from all 9 sources, but in production sources 1, 4 and 7 are not showing up.

inputs.conf:

[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-show-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-ignored-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-show-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-ignored-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-show-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-ignored-*.log]
disabled=0
index=app-jmc-shift-sms
sourcetype=app:jmcshift:logs
blacklist=\.(?:tar|gz)$
crcSalt=<SOURCE>

props.conf:

[app:jmcshift:logs]
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=30
LINE_BREAKER=([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}
SHOULD_LINEMERGE=false
TRUNCATE=99999

Sample logs: in all 9 sources the events start with a date, as shown below:

2023-01-12 23:24:50.245 [error]...........................................

The same inputs.conf and props.conf are used in the SIT and production environments. Not sure what the issue could be.
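One thing that stands out in the config above: every monitor pattern requires a hyphen (e.g. ShiftNonMinJMC-*.log), so the hyphen-less files ShiftNonMinJMC.log, ShiftMinJMC.log and ShiftBDRecordJMC.log (sources 1, 4 and 7) would never match these stanzas. A sketch of a pattern that covers both forms, assuming SIT is picking those files up through some other input:

    [monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC*.log]
    disabled=0
    index=app-jmc-shift-sms
    sourcetype=app:jmcshift:logs
    blacklist=\.(?:tar|gz)$
    crcSalt=<SOURCE>

ShiftNonMinJMC*.log matches ShiftNonMinJMC.log as well as the -show and -ignored variants, so one stanza per directory could replace the three.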
| loadjob savedsearch="nobody:splunk_fcr_evo:monitoring"
| table adt, FLOW, Date, NbRecordsOKFCR, Total, NbRecords, NBFile, NA1, NA2, NA3, CM, Alert
| where match(FLOW, "$Flow_token$") and match(adt, "$adt_token$") $filter_green_lights$
| fields adt FLOW Date NA1, NA2, NA3, CM, "Total"
| sort adt, Date
Monitoring & alerting for noise in an audio file?

Hi, I currently have a covert audio recorder in my daughter's kindergarten, since there's been an increase of violence on the news lately. The recorder generates a .WAV file that can cover more than 24 hours a day. My question is: can Splunk upload such a file and identify "events" based on a condition that could then trigger alerts? When you view an audio file, there's that "line" that moves up and down depending on the sound volume, right? I know that detecting a specific word might not be possible, but maybe when the staff screams and that volume line is at its very top, generate an alert? Or when there is a continuous "high" line because of sustained volume, which could hint at crying. This is very important to me; I would appreciate ideas. Thank you!