All Topics

Hi Splunky people! We are excited to share the newest updates in Splunk Cloud Platform 9.1.2308!

Analysts can benefit from:
- Updates to the data sets UI to improve overall usability and accessibility
- Dashboard Studio improvements:
  - New ability to conditionally show or hide panels in Grid layout
  - New option to select Dashboard Studio when saving reports to dashboards

Admins can benefit from:
- Zero-downtime upgrades for SHC Victoria stacks, allowing search continuity for short- and long-running searches during upgrades and rolling restarts
- New Workload Management rules with a search_time_range predicate to reduce the impact of searches over large amounts of data
- Decentralized search telemetry collection, increasing the efficiency, integrity (completeness), and reliability of search telemetry while freeing up Search Head capacity
- Improved role-based security for every search using Access Control Lists (ACLs)
- An update to Splunk Secure Gateway that allows turning mobile notifications on or off for Alerts and Reports, and a resizable SSG opt-in window enabling mobile users to opt in from their device during login
- Private Connectivity, now available for Splunk Cloud Platform search capabilities and UI access over private endpoints through AWS PrivateLink for PCI, HIPAA, IRAP, and GovCloud offerings

Check out the full release notes for more details. Python 2 is being deprecated and will no longer be available in coming releases. The jQuery v3.5 library is now the platform default; prior jQuery libraries are no longer supported.

Hello, I am looking to pass a list of devices into an enrichment playbook, but the issue I have is that the input playbook takes one device at a time and returns a JSON object of details for that device. I then want to add each result to a combined JSON object. How can I achieve this in the most efficient way?

Hello, I would like to create a table in a Dashboard that includes either the Baseline metrics or an average for a different time period; i.e., if the table shows the last week, I would like to see the average of the previous week as well:

    Business Transaction Name | Avg. Response Time | Avg. Response Time Baseline

Also, is there any way to set thresholds for status colors on tables? My goal is to create a weekly scheduled dashboard, and from the options I'm finding, what AppD can do here is very limited. Any ideas would be greatly appreciated. Thanks for the help, Tom

Hi, I am working on a query to determine the hourly (or daily) totals of all indexed data (in GB) coming from UFs. In our deployment, UFs send directly to the Indexer Cluster.

The issue I am having with the following query is that the volume is not realistic, and I am probably misunderstanding the _internal metrics log. Perhaps the kb field is not the correct field to sum as data throughput?

    index=_internal source=*metrics.log group=tcpin_connections fwdType=uf
    | eval GB = kb/(1024*1024)
    | stats sum(GB) as GB

Any advice appreciated. Thank you

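For cross-checking totals like this, the same sum can be bucketed by hour. A minimal sketch, assuming the standard tcpin_connections fields; as I understand it, kb in these events is the data received on each forwarder connection during the metrics interval (roughly every 30 seconds by default), so it measures wire volume received from the UFs rather than indexed volume:

    index=_internal source=*metrics.log group=tcpin_connections fwdType=uf
    | eval GB = kb/(1024*1024)
    | timechart span=1h sum(GB) AS GB_received_per_hour
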
I am using the query below to compare today's, yesterday's, and 8-days-ago data. When I use the timechart command, timewrap works, but when I use stats I get 2 rows of data, whereas there will be multiple other URLs to compare. Is it possible to do the comparison with stats? Otherwise, with timechart it creates a lot of columns of URL averages and counts.

    <query> URL=*
        [| makeresults
         | addinfo
         | eval row=mvrange(0,3)
         | mvexpand row
         | eval row=if(row=2,8,row)
         | eval earliest=relative_time(info_min_time,"-".row."d")
         | eval latest=relative_time(info_max_time,"-".row."d")
         | table earliest latest]
    | eval URL=replace(URL,"/*\d+","/{id}")
    | bucket _time span=15m
    | stats avg(responseTime) count by URL _time
    | sort -_time URL
    | timewrap d

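One possible alternative is to label each event with the comparison window it falls in and then chart by that label, which gives one row per URL and one column per period. A minimal sketch under the same three windows as the subsearch above (the period labels and the age arithmetic are illustrative assumptions, and they presume the search runs with "today" as the latest window):

    <query> URL=*
        [| makeresults | addinfo
         | eval row=mvrange(0,3) | mvexpand row | eval row=if(row=2,8,row)
         | eval earliest=relative_time(info_min_time,"-".row."d")
         | eval latest=relative_time(info_max_time,"-".row."d")
         | table earliest latest]
    | eval URL=replace(URL,"/*\d+","/{id}")
    | eval age_days=floor((now() - _time) / 86400)
    | eval period=case(age_days=0,"today", age_days=1,"yesterday", age_days>=8,"8d_ago")
    | where isnotnull(period)
    | chart avg(responseTime) AS avg_rt over URL by period
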
Hi All, I have a scripted input which gets data from a URL and sends it to Splunk, but now I have an issue with event formatting. The actual website data I am ingesting is shown below:

    ##### BEGIN STATUS #####
    #LAST UPDATE  :  Tue,  28  Nov  2023  11:00:16  +0000
    Abcstatus.status=ok
    Abcstatus.lastupdate=17xxxxxxxx555

    ###  ServiceStatus  ###
    xxxxx
    xxxxxx
    xxxx
    ###  SystemStatus  ###
    XXXX'
    XXXX

    ###  xyxStatus  ###
    XXX
    XXX
    XXX
    ...and so on.

But in Splunk, the following lines come in as separate events instead of being part of one complete event:

    ##### BEGIN STATUS #####     (comes in as a separate event)
    Abcstatus.status=ok          (this also comes in as a separate event)

Everything else arrives as one event, which is correct; the two lines above should also be part of that same event:

    Abcstatus.lastupdate=17xxxxxxxx555
    ###  ServiceStatus  ###
    ...
    #####   END STATUS  #####

Below is my props:

    DATETIME_CONFIG = CURRENT
    SHOULD_LINEMERGE=TRUE
    BREAK_ONLY_AFTER = ^#{5}\s{6}END\sSTATUS\s{6}\#{5}
    MUST_NOT_BREAK_AFTER=\#{5}\s{5}BEGIN\sSTATUS\s{5}\#{5}
    TIME_PREFIX=^#\w+\s\w+\w+\s:\s
    MAX_TIMESTAMP_LOOKAHEAD=200

Can you please help me with the issue?

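For what it's worth, one commonly used alternative to the line-merge attributes is to break the stream directly at the BEGIN banner with LINE_BREAKER. A minimal sketch, assuming each record runs from one BEGIN STATUS banner through the END STATUS banner and that the number of spaces inside the banners may vary (the stanza name is a placeholder):

    [your_status_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=#{5}\s+BEGIN\s+STATUS\s+#{5})
    DATETIME_CONFIG = CURRENT
    TRUNCATE = 0

The lookahead keeps the banner at the start of each event, and \s+ avoids depending on a fixed number of spaces between the words.
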
Hello, I'm trying to create a RAG dashboard that shows different colours should an issue occur with a service, e.g. if a service stops working the stat shows 1 and the colour turns red. I can do this, but what I am struggling with is combining multiple index searches into one overall stat: e.g. a "windows_perfmon" disk runs out of space and the stat increases to 1, a winhostmon service stops and that stat increases to 1, and I'm struggling to combine these into one overall stat, which would be 2 in this example. The current search I am using is:

    (index=winhostmon host="Splunktest" "Type=Service" sourcetype=WinHostMon DisplayName="Print Spooler" OR DisplayName="Snow Inventory Agent" StartMode="Auto" State="Stopped")
    OR (index="windows_perfmon" host="Splunktest" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes")
    | eval diskInfoA = if(counter=="% Free Space",mvzip(instance,Value),null())
    | eval diskInfoA1 = if(isnotnull(diskInfoA),mvzip(diskInfoA,counter),null())
    | eval diskInfoB = if(counter=="Free Megabytes",mvzip(instance,Value),null())
    | eval diskInfoB1 = if(isnotnull(diskInfoB),mvzip(diskInfoB,counter),null())
    | stats list(diskInfoA1) AS "diskInfoA1", list(diskInfoB1) AS "diskInfoB1" by host, instance, _time
    | makemv diskInfoA1 delim=","
    | makemv diskInfoB1 delim=","
    | eval freePerc = mvindex(diskInfoA1,1)
    | eval freeMB = mvindex(diskInfoB1,1)
    | eval usage=round(100-freePerc,2)
    | eval GB = round(freeMB/1024,2)
    | eval totalDiskGB = GB/(freePerc/100)
    | stats max(usage) AS "Disk Usage", max(GB) AS "Disk Free", max(totalDiskGB) AS "Total Disk Size (GB)" by host instance
    | where not instance="_Total"
    | where NOT LIKE(instance,"%Hard%")
    | search "Disk Usage" >90
    | stats count

The result I get is just count=1. Note that in the above example I have stopped the Print Spooler on the server, so the count should be 2, as there is also a disk running above 90%. I have also tried the append version below, but again I cannot get it to combine the results:

    index=winhostmon host="Splunktest" "Type=Service" sourcetype=WinHostMon DisplayName="Print Spooler" OR DisplayName="Snow Inventory Agent" StartMode="Auto" State="Stopped"
    | stats count
    | rename count as Service
    | append
        [ search index="windows_perfmon" host="Splunktest" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes"
        | eval diskInfoA = if(counter=="% Free Space",mvzip(instance,Value),null())
        | eval diskInfoA1 = if(isnotnull(diskInfoA),mvzip(diskInfoA,counter),null())
        | eval diskInfoB = if(counter=="Free Megabytes",mvzip(instance,Value),null())
        | eval diskInfoB1 = if(isnotnull(diskInfoB),mvzip(diskInfoB,counter),null())
        | stats list(diskInfoA1) AS "diskInfoA1", list(diskInfoB1) AS "diskInfoB1" by host, instance, _time
        | makemv diskInfoA1 delim=","
        | makemv diskInfoB1 delim=","
        | eval freePerc = mvindex(diskInfoA1,1)
        | eval freeMB = mvindex(diskInfoB1,1)
        | eval usage=round(100-freePerc,2)
        | eval GB = round(freeMB/1024,2)
        | eval totalDiskGB = GB/(freePerc/100)
        | stats max(usage) AS "Disk Usage", max(GB) AS "Disk Free", max(totalDiskGB) AS "Total Disk Size (GB)" by host instance
        | where not instance="_Total"
        | where NOT LIKE(instance,"%Hard%")
        | search "Disk Usage" >90
        | stats count
        | rename count as Disk ]

The end goal is to show one stat on a dashboard, and when you click on that number it opens another dashboard that shows the detail. Any help would be appreciated.

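For what it's worth, one common pattern for combining the two checks, assuming each branch already reduces to a single-row count: keep the field named count in both branches and sum after the append, instead of renaming each branch differently (differently named fields land in separate columns and never add up). A sketch with the branch bodies elided:

    index=winhostmon host="Splunktest" "Type=Service" ... State="Stopped"
    | stats count
    | append
        [ search index="windows_perfmon" host="Splunktest" object="LogicalDisk" ...
          ...disk calculations as in the post...
        | search "Disk Usage" >90
        | stats count ]
    | stats sum(count) AS total_alerts

The dashboard drilldown to a detail view can then hang off the single total_alerts value.
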
Using SPL and Splunk Search, I would like to search the logs array for each separate test_name and result, and create a table of the results. My current query looks something like:

    index="factory_mtp_events"
    | spath logs{}.test_name
    | search "logs{}.test_name"="Sample Test1"

A sample event:

    {
      logs: [
        {
          result: Pass
          test_name: Sample Test1
        }
        {
          result: Pass
          test_name: Sample Test2
        }
        {
          received: 4
          result: Pass
          test_name: Sample Test3
        }
        {
          expected: sample
          received: sample
          result: Pass
          test_name: Sample Test4
        }
        {
          expected: 1 A S
          received: 1 A S
          result: Pass
          test_name: Sample Test5
        }
        {
          expected: 1
          reason: Sample Reason
          received: 1
          result: Pass
          test_name: Sample Test6
        }
        {
          pt1: 25000
          pt1_recieved: 25012.666666666668
          pt2: 20000
          pt2_recieved: 25015.333333333332
          pt3: 15000
          pt3_recieved: 25017.0
          result: Fail
          test_name: Sample Test7
        }
        {
          result: Pass
          test_name: Sample Test8
          tolerance: + or - 5 C
          recieved_cj: 239
          user_temp: 250
        }
        {
          expected: Open, Short, and Load verified OK.
          pt1: 2
          pt1_recieved: 0
          pt2: 1
          pt2_received: 0
          result: Fail
          test_name: Sample Test9
        }
        {
          pt1: 2070
          pt1_tolerance: 2070
          pt1_received: 540
          pt2: 5450
          pt2_tolerance: 2800
          pt2_received: 538
          result: Fail
          test_name: Sample Test10
        }
        {
          expected: Soft Start verified by operator
          received: Soft Start verified
          result: Pass
          test_name: Sample Test11
        }
        {
          F_name: AUGER 320 F
          F_rpm: 1475
          F_rpm_t: 150
          F_rpm_received: 1500
          F_v: 182
          F_v_t: 160
          F_v_received: 173
          R_name: AUGER 320 R
          R_rpm: 1475
          R_rpm_t: 150
          R_rpm_received: 1450
          R_v: 155
          R_v_t: 160
          R_v_ugc: 154.66666666666666
          result: Pass
          test_name: Sample Test12
        }
        {
          result: Pass
          rpm: 2130
          rpm_t: 400
          test_name: Sample Test13
          received_rpm: 2126.6666666666665
          received_v: 615.6666666666666
          v: 630
          v_t: 160
        }
      ]
      result: Fail
      serial_number: XXXXXXXXXXXsample
      type: Test
    }

What is the purpose of the brackets after logs? I assume regex must be used to get the result from each test? How do I pull the results from each test into a table containing the results of every separate log? I would like the table for each test to look something like:

    ** Sample Test1 **
    Expected    Actual    Serial No.
    X           X         XXXXXXXsample
    Y           Z         XXXXXX2sample

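For background: in spath, the braces in logs{} are spath's notation for the elements of a JSON array, so no regex is needed to reach them. A minimal sketch of one common pattern, expanding the array so each test becomes its own row (field names follow the sample above; valid JSON in the raw event is an assumption):

    index="factory_mtp_events"
    | spath output=log_entry path=logs{}
    | mvexpand log_entry
    | spath input=log_entry
    | search test_name="Sample Test1"
    | table serial_number test_name expected received result

Running this once per test_name, or swapping the final search for something like | stats values(result) by test_name serial_number, would give one table per test.
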
Hello, the update installation of the forwarder is not running, and a rollback is being performed. During further tests I activated the verbose logging of the MSI installer. The following error occurs:

    Cannot set GROUPPERFORMANCEMONITORUSERS=1 since the local users/groups are not available on Domain Controller.

It is probably right about that. But why does the installer try to set this parameter at all during an update installation? Unfortunately, I cannot set any further options during an update. There is clearly a bug in the installer script. Any ideas?

Log:

    Action 14:44:08: SetAccountTypeData.
    Action start 14:44:08: SetAccountTypeData.
    MSI (s) (A0:60) [14:44:08:562]: PROPERTY CHANGE: Adding SetAccountType property. Its value is 'UseVirtualAccount=;UseLocalSystem=0;UserName=D3622070\SIEM-EVNT-READER;FailCA='.
    Action ended 14:44:08: SetAccountTypeData. Return value 1.
    MSI (s) (A0:60) [14:44:08:562]: Doing action: SetAccountType
    MSI (s) (A0:60) [14:44:08:562]: Note: 1: 2205 2: 3: ActionText
    Action 14:44:08: SetAccountType.
    Action start 14:44:08: SetAccountType.
    MSI (s) (A0:F0) [14:44:08:562]: Invoking remote custom action. DLL: C:\Windows\Installer\MSIAC9D.tmp, Entrypoint: SetAccountTypeCA
    SetAccountType: Error 0x80004005: Cannot set GROUPPERFORMANCEMONITORUSERS=1 since the local users/groups are not available on Domain Controller.
    CustomAction SetAccountType returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox)
    Action ended 14:44:08: SetAccountType. Return value 3.
    MSI (s) (A0:60) [14:44:08:594]: Note: 1: 2265 2: 3: -2147287035
    MSI (s) (A0:60) [14:44:08:594]: User policy value 'DisableRollback' is 0
    MSI (s) (A0:60) [14:44:08:594]: Machine policy value 'DisableRollback' is 0
    MSI (s) (A0:60) [14:44:08:594]: Note: 1: 2318 2:
    MSI (s) (A0:60) [14:44:08:609]: Executing op: Header(Signature=1397708873,Version=500,Timestamp=1467512195,LangId=1033,Platform=589824,ScriptType=2,ScriptMajorVersion=21,ScriptMinorVersion=4,ScriptAttributes=1)
    MSI (s) (A0:60) [14:44:08:609]: Executing op: DialogInfo(Type=0,Argument=1033)
    MSI (s) (A0:60) [14:44:08:609]: Executing op: DialogInfo(Type=1,Argument=UniversalForwarder)
    MSI (s) (A0:60) [14:44:08:609]: Executing op: RollbackInfo(,RollbackAction=Rollback,RollbackDescription=Rolling back action:,RollbackTemplate=[1],CleanupAction=RollbackCleanup,CleanupDescription=Removing backup files,CleanupTemplate=File: [1])
    MSI (s) (A0:60) [14:44:08:609]: Executing op: RegisterBackupFile(File=C:\Config.Msi\b08a3953.rbf)

I'm in the phase of the migration of the Splunk components where I need to migrate some add-ons from the older SH to the new SH. Will copying TA_crowdstrike-devices from the old server to the new server work?

Assistance with Custom Attribute Retrieval in VMware App for Splunk

Hello everyone, I'm currently working on integrating custom attributes from VMware into Splunk ITSI using the Splunk Add-on for VMware Metrics. The standard functionality of the add-on doesn't seem to support custom attribute retrieval out of the box, and I need a "TechnicalService" custom attribute for an entity split. To tackle this, I am looking into extending the existing add-on scripts, particularly focusing on the CustomFieldsManager within the VMware API. I've explored the inventory_handlers.py, inventory.py, and mo.py scripts within the add-on and found a potential pathway through the CustomFieldsManager class in mo.py. However, I'd appreciate any insights or guidance from those who have tackled similar integrations or extensions. Specifically, I'm interested in:

- Best practices for extending the add-on to include custom attributes without affecting its core functionalities.
- Any known challenges or pitfalls in modifying these scripts for custom attribute integration.
- Advice on testing and validation to ensure stable and efficient operation post-modification.

I'm open to suggestions, alternative approaches, or any resources that might be helpful in this context. Thank you in advance for your assistance and insights! Kind regards, Charly

I have a table that contains Notable Source events with a drilldown that links to a dashboard of those events. In edit mode the text contained in the table is white. However, when I switch to view mode the text changes to the default blue hyperlink color. I was wondering if there was a way to change the default color to be white before a user interacts with the table and then change to blue once the user clicks on a field in the table. I saw a post where someone had a similar question and I read the links provided in the solution; however, I'm new to XML so I still didn't really understand after reading the articles. Here is the post I referenced: https://community.splunk.com/t5/Dashboards-Visualizations/is-it-possible-to-apply-a-hex-color-to-change-the-font-of-the/m-p/536262. Any help would be greatly appreciated!
Hi, I want to find the grade based on my case condition, but my query is not working as expected:

    | eval Grade=case(Cumulative=1,"D", Cumulative>1 AND Cumulative<=1.3,"D+")]

Example: the Grade should be based on avg(GPA). If avg(GPA) is 1, the Grade at the bottom (Avg Grade) should be D; if it is between 1 and 1.3, it should be D+.

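A sketch of one possible shape for the eval, assuming Cumulative is numeric (note that Cumulative=1 matches only exactly 1, so an average such as 1.02 falls through unless a range or a rounding step is used, and the trailing ] would be a syntax error outside a subsearch):

    | eval Grade=case(Cumulative<=1, "D",
                      Cumulative>1 AND Cumulative<=1.3, "D+",
                      true(), "ungraded")

The boundary values and the catch-all label are illustrative assumptions taken from the example given.
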
Hello All, I am testing the data inputs for the Splunk Add-on for ServiceNow, and there is a requirement to include only certain fields in the data. I tried to set the filtering using the "Included Parameters" option in the input and added the desired comma-separated fields:

    dv_active,dv_assignment_group,dv_assigned_to,dv_number,dv_u_resolution_category

However, I am not able to see those fields in the output; what I see is only the two default id and time fields. Is there anything that I am doing wrong? Regards, Himani.

Hello community, below is my sample log file. I want to extract each individual event (running from @ID to REMARK) from the log file. I tried to achieve this using the following regex:

    (^@ID[\s\S]*?REMARK.*$)

This regex is taking the whole log file as a single event. I also tried to alter props.conf using the same regex:

    [t24]
    SHOULD_LINEMERGE=False
    LINE_BREAKER=(^@ID[\s\S]*?REMARK.*$)
    NO_BINARY_CHECK=true
    disabled=false
    INDEXED_EXTRACTIONS = csv

Sample data:

    LIST F.PROTOCOL
    @ID  PROTOCOL.ID  PROCESS.DATE  TIME.MSECS  K.USER  APPLICATION  LEVEL.FUNCTION  ID  REMARK
    PAGE 1    11:34:02  23 NOV 2023
    @ID............ 202309260081340532.21
    @ID............ 202309260081340532.21
    PROTOCOL.ID.... 202309260081340532.21
    PROCESS.DATE... 20230926
    TIME.MSECS..... 11:15:32:934
    K.USER......... INPUTTER
    APPLICATION.... AC.INWARD.ENTRY
    LEVEL.FUNCTION. 1
    ID.............
    REMARK......... ENQUIRY - AC.INTERFACE.REPORT
    @ID............ 202309260081340523.16
    @ID............ 202309260081340523.16
    PROTOCOL.ID.... 202309260081340523.16
    PROCESS.DATE... 20230926
    TIME.MSECS..... 11:15:23:649
    K.USER......... INPUTTER
    APPLICATION.... AC.INWARD.ENTRY
    LEVEL.FUNCTION. 1
    ID.............
    REMARK......... ENQUIRY - AC.INTERFACE.REPORT

I checked the regex online and it matches the blocks I want (screenshots of the online regex test and of the resulting events in Splunk are omitted here). I want my data to be shown in table form, with one row per @ID block.

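A sketch of one possible props.conf instead of the regex above, assuming each event should begin at a line starting with @ID (note that LINE_BREAKER is applied to the raw byte stream, so ^ and $ anchors and lazy multi-line matches generally do not behave as they do in an online regex tester; also, INDEXED_EXTRACTIONS = csv expects comma-separated data and may conflict with custom line breaking on this fixed-width report):

    [t24]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=@ID\.+\s)
    TRUNCATE = 0

One caveat: the sample repeats the @ID line at the start of each block, so this pattern would also split between the two consecutive @ID lines; the duplicate line would need handling or the pattern tightening. The table view could then be built at search time with field extractions on the NAME....value lines.
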
I have this query, where I want to build a dataset from a variable and its 4 previous values. I can solve it like so:

    | makeresults
    | eval id=split("a,b,c,d,e,f,g",",")
    | eval a=split("1,2,3,4,5,6,7",",")
    | eval temp=mvzip(id,a,"|")
    | mvexpand temp
    | rex field=temp "(?P<id>[^|]+)\|(?P<a>[^|]+)"
    | fields - temp
    | streamstats current=false last(a) AS a_lag1
    | streamstats current=false last(a_lag1) AS a_lag2
    | streamstats current=false last(a_lag2) AS a_lag3
    | streamstats current=false last(a_lag3) AS a_lag4
    | where isnotnull(a_lag4)
    | table id a*

However, if I want to extend this to, say, 100 previous values, this code becomes convoluted and slow. I imagine there must be a better way to accomplish this, but my research has not produced any alternative. Any ideas are appreciated.

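One possible alternative is a single streamstats with a sliding window that collects the last N+1 values as a multivalue field, then picks each lag out of it with mvindex. A sketch for the 4-lag case above (scaling to 100 lags means window=101 and one mvindex per lag, but still only one streamstats pass); this assumes list() preserves arrival order within the window, oldest first, so mvindex(a_hist,-2) is the previous value:

    | makeresults
    | eval id=split("a,b,c,d,e,f,g",","), a=split("1,2,3,4,5,6,7",",")
    | eval temp=mvzip(id,a,"|")
    | mvexpand temp
    | rex field=temp "(?P<id>[^|]+)\|(?P<a>[^|]+)"
    | fields - temp
    | streamstats window=5 current=true list(a) AS a_hist
    | where mvcount(a_hist)=5
    | eval a_lag1=mvindex(a_hist,-2), a_lag2=mvindex(a_hist,-3), a_lag3=mvindex(a_hist,-4), a_lag4=mvindex(a_hist,-5)
    | fields - a_hist
    | table id a*
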
Hi, our firewalls generate around 1000 High and Critical alerts daily. I would like to create use cases related to these notifications, but I am not sure what the best way is to handle that volume. Could somebody advise on the best way to implement this, please?

Hi there, what are the best practices to migrate from Azure Sentinel to Splunk? We want to migrate sources, historical data, and use cases.

Hi Splunkers, I have a request from my customer. We have, as in many prod environments, Windows logs. We know that we can view events on the Splunk Console, with the Splunk Add-on for Microsoft Windows, in two ways: legacy format (like the original ones in AD) or XML. Is it possible to see them in JSON format? If yes, can we achieve this directly with the above add-on, or do we need other tools?

Hello, I'm implementing Splunk Security Essentials in an environment that already has detection rules based on the MITRE ATT&CK framework. I have correctly entered the data sources in Data Inventory and marked them as "Available". In Content > Custom Content, I added our detection rules by hand; I specified the Tactics and the MITRE Techniques and Sub-techniques. I also set their bookmark status, and some are "Successfully implemented". When I go to Analytics Advisor > MITRE ATT&CK Framework, I see the "Content (Available)" in the MITRE ATT&CK matrix, and it's consistent with our detection rules. But when I select Threat Groups, in "2. Selected Content", under "Total Content Selected", I get zero, whereas our detection rules relate to the sub-techniques used by the selected Threat Groups. How can I solve this problem?