All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


"Do mvexpand to split it into separate results. Then do spath." I need more detail, please. Is there a way to see what mvexpand returns? Debugging queries feels next to impossible when spath-ing the mv results. What exactly should I be passing to input=? index="factory_mtp_events" | spath "logs{}" output=logs | mvexpand logs | spath input=logs.test_name
We are running 9.1.2
I am using the query below to compare today's, yesterday's, and 8-days-ago data. When I use the timechart command, timewrap works, but when I use it on stats I get only 2 rows of data, whereas there will be multiple other URLs to compare. Is it possible to do the comparison with stats? Otherwise, with timechart it creates a lot of columns of per-URL averages and counts.

<query> URL=*
    [| makeresults
     | addinfo
     | eval row=mvrange(0,3)
     | mvexpand row
     | eval row=if(row=2,8,row)
     | eval earliest=relative_time(info_min_time,"-".row."d")
     | eval latest=relative_time(info_max_time,"-".row."d")
     | table earliest latest]
| eval URL=replace(URL,"/*\d+","/{id}")
| bucket _time span=15m
| stats avg(responseTime) count by URL _time
| sort -_time URL
| timewrap d
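The URL-normalization step in the query above (collapsing numeric path segments into a placeholder so different IDs group as one URL) can be sketched outside Splunk. This Python sketch is an assumption about the intent; the exact regex in the post may behave differently:

```python
import re

def normalize_url(url: str) -> str:
    # Replace each all-digit path segment with a literal "/{id}" so that
    # /api/orders/12345 and /api/orders/678 aggregate as the same URL.
    return re.sub(r"/\d+(?=/|$)", "/{id}", url)

print(normalize_url("/api/orders/12345/items/678"))  # /api/orders/{id}/items/{id}
print(normalize_url("/health"))                      # /health (unchanged)
```

The lookahead `(?=/|$)` keeps mixed segments like `/v2items` intact; only purely numeric segments are rewritten.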
Thank you, it works now. I am going to monitor for one more day before I mark your response as the accepted solution. But in the meantime, could you kindly explain how the lines below work?

| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status

I started guessing and playing with it, but for certain lines I am unable to understand what they do and how they fit here to produce the desired result, to be honest.
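The trick being asked about is SPL's dynamic field naming: `eval {Status}=Status` creates a column named after the *value* of Status, and `coalesce` then picks whichever column ended up populated. A rough Python analogue of that pivot, using a single hypothetical row:

```python
# Hypothetical input: one result whose Status field holds "FILE_DELIVERED".
rows = [{"Status": "FILE_DELIVERED"}]

collapsed = {}
for row in rows:
    status = row["Status"]
    # eval {Status}=Status -> create a column NAMED after the value,
    # holding that same value.
    collapsed[status] = status

# eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
# -> first non-null of the two possible dynamically-created columns.
final_status = collapsed.get("FILE_DELIVERED") or collapsed.get("FILE_NOT_DELIVERED")
print(final_status)  # FILE_DELIVERED
```

In the real SPL, `stats values(*) as *` is what collapses multiple rows so the dynamically named columns land on one result before the coalesce.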
Hi All, I have a scripted input which gets data from a URL and sends it to Splunk, but now I have an issue with event formatting. The actual website data I am ingesting looks like this:

##### BEGIN STATUS #####
#LAST UPDATE  :  Tue,  28  Nov  2023  11:00:16  +0000
Abcstatus.status=ok
Abcstatus.lastupdate=17xxxxxxxx555

###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
XXXX'
XXXX

###  xyxStatus  ###
XXX
XXX
XXX
...and so on.

But in Splunk the lines below are coming in as separate events instead of being part of one complete event:

##### FIRST STATUS ##### (coming in as a separate event)
Abcstatus.status=ok (this is also coming in as a separate event)

All remaining lines come in as one event, which is correct; the two lines above should also be part of that one event:

Abcstatus.lastupdate=17xxxxxxxx555
###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
...and so on.
#####   END STATUS  #####

Below is my props.conf:

DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = TRUE
BREAK_ONLY_AFTER = ^#{5}\s{6}END\sSTATUS\s{6}\#{5}
MUST_NOT_BREAK_AFTER = \#{5}\s{5}BEGIN\sSTATUS\s{5}\#{5}
TIME_PREFIX = ^#\w+\s\w+\w+\s:\s
MAX_TIMESTAMP_LOOKAHEAD = 200

Can you please help me with the issue?
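One thing worth checking in a case like this is whether the fixed whitespace counts (`\s{5}`, `\s{6}`) in the props.conf patterns actually match the header and footer lines, since the number of spaces in the raw feed may vary. This sketch tests a more tolerant `\s+` variant in plain Python; the sample lines are assumptions based on the data shown above, not the real feed:

```python
import re

# Hypothetical header/footer lines as they might arrive from the feed.
begin_line = "#####   BEGIN STATUS   #####"
end_line   = "#####   END STATUS   #####"

# A more tolerant pattern than the fixed \s{5}/\s{6} run lengths in the
# original props.conf: \s+ survives any amount of internal whitespace.
footer = re.compile(r"^#{5}\s+END\s+STATUS\s+#{5}$")

print(bool(footer.match(end_line)))    # True  -> event would break here
print(bool(footer.match(begin_line)))  # False -> BEGIN line is not a footer
```

If the tolerant pattern matches where the strict one does not, loosening the whitespace quantifiers in BREAK_ONLY_AFTER / MUST_NOT_BREAK_AFTER is a reasonable first experiment.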
Since you mentioned copying a query into db_inputs.conf, ensure that the database configuration, including connection parameters and query settings, is accurate. Double-check the syntax and make sure there are no typos or errors in the configuration. If it helps, karma would be appreciated.
Hello, I'm trying to create a RAG dashboard that will show different colours should an issue occur with a service, e.g. if a service stops working the stat would show as 1 and the colour would turn red. I can do this, but what I am struggling with is combining multiple index searches into one overall stat, e.g. in index "windows_perfmon" a disk runs out of space and the stat increases to 1, a winhostmon index service stops and that stat increases to 1; what I can't do is combine these into one overall stat, which would be 2 in this example. The current search I am using is:

(index=winhostmon host="Splunktest" "Type=Service" sourcetype=WinHostMon DisplayName="Print Spooler" OR DisplayName="Snow Inventory Agent" StartMode="Auto" State="Stopped") OR (index="windows_perfmon" host="Splunktest" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes")
| eval diskInfoA = if(counter=="% Free Space",mvzip(instance,Value),null())
| eval diskInfoA1 = if(isnotnull(diskInfoA),mvzip(diskInfoA,counter),null())
| eval diskInfoB = if(counter=="Free Megabytes",mvzip(instance,Value),null())
| eval diskInfoB1 = if(isnotnull(diskInfoB),mvzip(diskInfoB,counter),null())
| stats list(diskInfoA1) AS "diskInfoA1", list(diskInfoB1) AS "diskInfoB1" by host, instance, _time
| makemv diskInfoA1 delim=","
| makemv diskInfoB1 delim=","
| eval freePerc = mvindex(diskInfoA1,1)
| eval freeMB = mvindex(diskInfoB1,1)
| eval usage=round(100-freePerc,2)
| eval GB = round(freeMB/1024,2)
| eval totalDiskGB = GB/(freePerc/100)
| stats max(usage) AS "Disk Usage", max(GB) AS "Disk Free", max(totalDiskGB) AS "Total Disk Size (GB)" by host instance
| where not instance="_Total"
| where NOT LIKE(instance,"%Hard%")
| search "Disk Usage" >90
| stats count

The result I get is just count=1. Note that in the above example I have stopped the Print Spooler on the server, so the count should be 2 because there is also a disk running above 90%. I have also tried the append version below, but again I cannot get it to combine the results:

index=winhostmon host="Splunktest" "Type=Service" sourcetype=WinHostMon DisplayName="Print Spooler" OR DisplayName="Snow Inventory Agent" StartMode="Auto" State="Stopped"
| stats count
| rename count as Service
| append
    [ search index="windows_perfmon" host="Splunktest" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes"
    | eval diskInfoA = if(counter=="% Free Space",mvzip(instance,Value),null())
    | eval diskInfoA1 = if(isnotnull(diskInfoA),mvzip(diskInfoA,counter),null())
    | eval diskInfoB = if(counter=="Free Megabytes",mvzip(instance,Value),null())
    | eval diskInfoB1 = if(isnotnull(diskInfoB),mvzip(diskInfoB,counter),null())
    | stats list(diskInfoA1) AS "diskInfoA1", list(diskInfoB1) AS "diskInfoB1" by host, instance, _time
    | makemv diskInfoA1 delim=","
    | makemv diskInfoB1 delim=","
    | eval freePerc = mvindex(diskInfoA1,1)
    | eval freeMB = mvindex(diskInfoB1,1)
    | eval usage=round(100-freePerc,2)
    | eval GB = round(freeMB/1024,2)
    | eval totalDiskGB = GB/(freePerc/100)
    | stats max(usage) AS "Disk Usage", max(GB) AS "Disk Free", max(totalDiskGB) AS "Total Disk Size (GB)" by host instance
    | where not instance="_Total"
    | where NOT LIKE(instance,"%Hard%")
    | search "Disk Usage" >90
    | stats count
    | rename count as Disk ]

The end goal is to show one stat on a dashboard; when you click on that number it opens another dashboard that shows the detail. Any help would be appreciated.
I found that the solution to this is using the syslog-ng syslog() driver instead of the network() driver as your source. Here is an example configuration:

source "s_fortigate" {
    syslog(
        transport("tcp")
        port(1514)
    );
};

template t_msgheader_msg {
    template("$MSGHDR $MSG\n");
};

destination "d_fortigate" {
    file("/data/syslog/fortigate.log"
        owner(splunk)
        group(splunk)
        dir_owner(splunk)
        dir_group(splunk)
        create_dirs(yes)
        dir_perm(0770)
        perm(0660)
        template(t_msgheader_msg)
    );
};

log {
    source("s_fortigate");
    destination("d_fortigate");
};

@muradgh @idiota
What version of Splunk are you running? We're still on 8, but upgrading to 9 soon™.
1. The brackets are just part of the field's name. Nothing more, nothing less.
2. Working with regex over structured data is... risky.
3. Extract the "logs" part. You should get a multivalued field of JSON-formatted objects. Do mvexpand to split it into separate results. Then do spath. Otherwise you'd just get huge multivalued blobs of data: Splunk doesn't play the "json structure" game, so if you just flatten your JSON, you'll get all values of "the same" field compressed into a single multivalued field.
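The flatten-versus-expand distinction in point 3 can be illustrated outside Splunk. This Python sketch uses a hypothetical two-entry logs array; the field names mirror the event shape discussed in this thread:

```python
import json

event = json.loads("""
{"logs": [
  {"test_name": "Sample Test1", "result": "Pass"},
  {"test_name": "Sample Test7", "result": "Fail"}
]}
""")

# Flattening (roughly what spath alone does): every logs{}.result is
# squashed into one multivalued list, and per-test pairing is lost.
flattened = [entry["result"] for entry in event["logs"]]
print(flattened)  # ['Pass', 'Fail']

# Expanding first (the mvexpand step): each log entry stays its own
# record, so test_name and result remain paired per row.
expanded = [(e["test_name"], e["result"]) for e in event["logs"]]
print(expanded)  # [('Sample Test1', 'Pass'), ('Sample Test7', 'Fail')]
```

The second shape is what lets a later spath (or table) report each test's result against the right test_name.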
As an added note to this forum, I am having the same exact issue. Interestingly enough, like you I had HEADER_FIELD_LINE_NUMBER set (in my case it is set to 1). I have been ingesting the files for a couple of days, and for some reason today it just started ingesting the headers as an event, though nothing has changed on my end.
It is not failing, since we haven't defined any sourcetype or source in the search; we hardcoded the payload. It is working properly, giving the exact same headings required. The problem is how it will extract this information from a given sourcetype or source.
Using SPL and Splunk Search, I would like to search the logs array for each separate test_name and result and create a table with the results. My current query looks something like:

index="factory_mtp_events" | spath logs{}.test_name | search "logs{}.test_name"="Sample Test1"

{
  logs: [
    {
      result: Pass
      test_name: Sample Test1
    }
    {
      result: Pass
      test_name: Sample Test2
    }
    {
      received: 4
      result: Pass
      test_name: Sample Test3
    }
    {
      expected: sample
      received: sample
      result: Pass
      test_name: Sample Test4
    }
    {
      expected: 1 A S
      received: 1 A S
      result: Pass
      test_name: Sample Test5
    }
    {
      expected: 1
      reason: Sample Reason
      received: 1
      result: Pass
      test_name: Sample Test6
    }
    {
      pt1: 25000
      pt1_recieved: 25012.666666666668
      pt2: 20000
      pt2_recieved: 25015.333333333332
      pt3: 15000
      pt3_recieved: 25017.0
      result: Fail
      test_name: Sample Test7
    }
    {
      result: Pass
      test_name: Sample Test8
      tolerance: + or - 5 C
      recieved_cj: 239
      user_temp: 250
    }
    {
      expected: Open, Short, and Load verified OK.
      pt1: 2
      pt1_recieved: 0
      pt2: 1
      pt2_received: 0
      result: Fail
      test_name: Sample Test9
    }
    {
      pt1: 2070
      pt1_tolerance: 2070
      pt1_received: 540
      pt2: 5450
      pt2_tolerance: 2800
      pt2_received: 538
      result: Fail
      test_name: Sample Test10
    }
    {
      expected: Soft Start verified by operator
      received: Soft Start verified
      result: Pass
      test_name: Sample Test11
    }
    {
      F_name: AUGER 320 F
      F_rpm: 1475
      F_rpm_t: 150
      F_rpm_received: 1500
      F_v: 182
      F_v_t: 160
      F_v_received: 173
      R_name: AUGER 320 R
      R_rpm: 1475
      R_rpm_t: 150
      R_rpm_received: 1450
      R_v: 155
      R_v_t: 160
      R_v_ugc: 154.66666666666666
      result: Pass
      test_name: Sample Test12
    }
    {
      result: Pass
      rpm: 2130
      rpm_t: 400
      test_name: Sample Test13
      received_rpm: 2126.6666666666665
      received_v: 615.6666666666666
      v: 630
      v_t: 160
    }
  ]
  result: Fail
  serial_number: XXXXXXXXXXXsample
  type: Test
}

What is the purpose of the brackets after logs? I assume regex must be used to get the result from each test? How do I pull results from each test into a table containing the results of every separate log?
I would like the table for each test to look something like:

** Sample Test1**

Expected  Actual  Serial No.
X         X       XXXXXXXsample
Y         Z       XXXXXX2sample
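The desired table (one row per device per test, pairing expected and received values) can be sketched in Python from a parsed event. The sample event and serial number here are trimmed, hypothetical stand-ins for the data above:

```python
# Hypothetical parsed events: one device event with one log entry each,
# mirroring the logs[] structure from the question.
events = [
    {"serial_number": "XXXXXXXsample",
     "logs": [{"test_name": "Sample Test4", "expected": "sample",
               "received": "sample", "result": "Pass"}]},
]

# Build one table row per (event, log entry).
rows = []
for event in events:
    for log in event["logs"]:
        rows.append({
            "Test": log["test_name"],
            "Expected": log.get("expected", ""),   # some tests lack expected/
            "Actual": log.get("received", ""),     # received, so default empty
            "Serial No.": event["serial_number"],
        })

print(rows[0])
```

In SPL the equivalent flow would be: extract logs{} with spath, mvexpand to one result per log entry, spath again on each entry, then table test_name expected received serial_number.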
The issue has been resolved after we provided admin permissions to all the respective knowledge objects of this add-on, like saved searches and lookups. I do not see this error any more.
Hello, the update installation of the forwarder is not running and a roll-back is being performed. During further tests I activated the verbose logging of the MSI installer. The following error occurs:

Cannot set GROUPPERFORMANCEMONITORUSERS=1 since the local users/groups are not available on Domain Controller.

It is probably right. But why does the installer try to set this parameter at all during an update installation? Unfortunately, I cannot set any further options here during an update. There is clearly a bug in the installer script. Any ideas?

Log:

Action 14:44:08: SetAccountTypeData.
Action start 14:44:08: SetAccountTypeData.
MSI (s) (A0:60) [14:44:08:562]: PROPERTY CHANGE: Adding SetAccountType property. Its value is 'UseVirtualAccount=;UseLocalSystem=0;UserName=D3622070\SIEM-EVNT-READER;FailCA='.
Action ended 14:44:08: SetAccountTypeData. Return value 1.
MSI (s) (A0:60) [14:44:08:562]: Doing action: SetAccountType
MSI (s) (A0:60) [14:44:08:562]: Note: 1: 2205 2: 3: ActionText
Action 14:44:08: SetAccountType.
Action start 14:44:08: SetAccountType.
MSI (s) (A0:F0) [14:44:08:562]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIAC9D.tmp, Entrypoint: SetAccountTypeCA
SetAccountType: Error 0x80004005: Cannot set GROUPPERFORMANCEMONITORUSERS=1 since the local users/groups are not available on Domain Controller.
CustomAction SetAccountType returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
Action ended 14:44:08: SetAccountType. Return value 3.
MSI (s) (A0:60) [14:44:08:594]: Note: 1: 2265 2: 3: -2147287035
MSI (s) (A0:60) [14:44:08:594]: User policy value 'DisableRollback' is 0
MSI (s) (A0:60) [14:44:08:594]: Machine policy value 'DisableRollback' is 0
MSI (s) (A0:60) [14:44:08:594]: Note: 1: 2318 2:
MSI (s) (A0:60) [14:44:08:609]: Executing op: Header(Signature=1397708873,Version=500,Timestamp=1467512195,LangId=1033,Platform=589824,ScriptType=2,ScriptMajorVersion=21,ScriptMinorVersion=4,ScriptAttributes=1)
MSI (s) (A0:60) [14:44:08:609]: Executing op: DialogInfo(Type=0,Argument=1033)
MSI (s) (A0:60) [14:44:08:609]: Executing op: DialogInfo(Type=1,Argument=UniversalForwarder)
MSI (s) (A0:60) [14:44:08:609]: Executing op: RollbackInfo(,RollbackAction=Rollback,RollbackDescription=Rolling back action:,RollbackTemplate=[1],CleanupAction=RollbackCleanup,CleanupDescription=Removing backup files,CleanupTemplate=File: [1])
MSI (s) (A0:60) [14:44:08:609]: Executing op: RegisterBackupFile(File=C:\Config.Msi\b08a3953.rbf)
I'm in the migration phase for the Splunk components, where I need to migrate some add-ons from the older SH to the new SH. Will copying TA_crowdstrike-devices from the old server to the new server work?
Your value (3.17) isn't covered by any condition in your case function which is why you don't have a grade. What are the ranges for your grades?
Assistance with Custom Attribute Retrieval in VMware App for Splunk

Hello everyone,

I'm currently working on integrating custom attributes from VMware into Splunk ITSI using the Splunk Add-on for VMware Metrics. The standard functionality of the add-on doesn't seem to support custom attribute retrieval out of the box, and I need a "TechnicalService" custom attribute for an entity split. To tackle this, I am looking into extending the existing add-on scripts, particularly focusing on the CustomFieldsManager within the VMware API. I've explored the inventory_handlers.py, inventory.py, and mo.py scripts within the add-on and found a potential pathway through the CustomFieldsManager class in mo.py. However, I'd appreciate any insights or guidance from those who have tackled similar integrations or extensions. Specifically, I'm interested in:

- Best practices for extending the add-on to include custom attributes without affecting its core functionalities.
- Any known challenges or pitfalls in modifying these scripts for custom attribute integration.
- Advice on testing and validation to ensure stable and efficient operation post-modification.

I'm open to suggestions, alternative approaches, or any resources that might be helpful in this context. Thank you in advance for your assistance and insights!

Kind regards,
Charly
@ITWhisperer I want to display the Grade based on the avg GPA, and the condition below is not giving the result:

| eval Grade=case(GPA=1,"D", GPA>1 AND GPA<=1.3,"D+")
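As the reply above points out, the case() only covers GPA values up to 1.3, so a GPA like 3.17 falls through with no grade. This Python sketch shows a fully covered set of bands; the cut-offs are hypothetical and should be replaced with the grading scale actually in use:

```python
def grade(gpa: float) -> str:
    # Hypothetical grade bands; every GPA value falls into exactly one,
    # so no input is left ungraded (unlike the original two-branch case()).
    if gpa <= 1.0: return "D"
    if gpa <= 1.3: return "D+"
    if gpa <= 1.7: return "C-"
    if gpa <= 2.0: return "C"
    if gpa <= 2.3: return "C+"
    if gpa <= 2.7: return "B-"
    if gpa <= 3.0: return "B"
    if gpa <= 3.3: return "B+"
    if gpa <= 3.7: return "A-"
    return "A"  # catch-all, like a final true() branch in SPL case()

print(grade(3.17))  # B+
```

In SPL the same idea is a case() with a condition per band plus a final `true(), "A"` catch-all so no value is left unmatched.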
That worked! Thank you for the help!