All Topics


Hi all, I have a demo Enterprise Security environment with IDX (1), SH (3), FWD (1), a master node, and a deployer (1). I have one SHC with three search heads. When I install the "splunk_app_stream" app on the deployer and deploy it to the SHC, splunkd keeps running but Splunk Web access stops working. I have set the following: master node -> replication_factor = 1, search_factor = 1; SH cluster -> replication_factor = 3. I do not know what the problem is.
I am looking to chart a field that contains a request path, but I want to display and get a total count of both events that contain the root request path (a) and events that contain the root plus <some guid>/contents (b). The path is a field I manually extracted called "request_path_in_request". Example of the paths I want to combine in the chart:

(a) path=/v4/layers/asPlanted
(b) path=/v4/layers/asPlanted<some guid>/contents

Here is my Splunk query so far:

source="partners-api-ol" request_path_in_request="/v*" | timechart count by request_path_in_request useother=f limit=10

Is there a way to show only the category "/v4/layers/asPlanted", but have the count be the total of all events with that root path?
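A minimal sketch of one way to do this, assuming the GUID variants always begin with the literal root /v4/layers/asPlanted: normalize the extracted field down to its root with eval replace() (using a \1 backreference) before the timechart, so both variants fall into the same category. The source and field names are taken from the question.

source="partners-api-ol" request_path_in_request="/v*"
| eval path_root=replace(request_path_in_request, "^(/v4/layers/asPlanted).*$", "\1")
| timechart count by path_root useother=f limit=10

Any path that does not start with that root is left unchanged, so the other categories in the chart keep their original values.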
Currently I have a Postgres system hosted on Red Hat Linux, with a Universal Forwarder installed on it. I am configuring inputs.conf as below, under /opt/splunk/etc/apps/SplunkForwarder/local/inputs.conf:

[monitor:///var/lib/pgsql/data/log]
disabled = 0
crcSalt = <SOURCE>
index = pgsql

On the Postgres host, these are the log files under /var/lib/pgsql/data/log:

postgresql-Fri.log
postgresql-Mon.log
postgres-Sat.log
postgres-Tue.log

Issue: I am not able to see the logs arriving in the pgsql index; instead they are going to the main index.

Note: I have to use crcSalt = <SOURCE> because of how Splunk checksums the first 256 bytes of each file; otherwise I would not see the logs in any index.
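One possible cause is a second monitor stanza for the same path in another app taking precedence over this one; a quick way to confirm where the files are actually landing, and under which sourcetype, is a sketch like the following (searching all indexes by source):

index=* source="/var/lib/pgsql/data/log/*"
| stats count by index sourcetype source host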
Hello, I have a log like:

<182>Mar 1 18:18:24 SND1 Policy Manager severity=Info saf=1 safd=RACF record=Mar 1 13:17:31 SND1 baspm[67174579]: Compliance Failure='Sensitive Dataset=USS.SND2.VAR resides on z/OS shared DASD volume=SN2U01 but is not part of SPM dataset filter=SHRD' [DS33795]

I would like to extract these fields: SND1 as the LPAR field, [DS33795] as the DISANUM field, and 'Sensitive Dataset=USS.SND2.VAR resides on z/OS shared DASD volume=SN2U01 but is not part of SPM dataset filter=SHRD' as the DESCRIPTION field. Can you help me write the regex? I started with the following:

"Compliance Failure" sourcetype="AMI SPM"
| rex field=_raw "^(?:[^:\n]*:){2}\d+(?P<LPAR>\s+\w+)(?:[^\[\n]*\[){2}(?P<DISANUM>\w+)" offset_field=_extracted_fields_bounds
| stats count by DISANUM

but I am not able to get the string after "Compliance Failure" into the DESCRIPTION field. Thanks in advance, Maurizio
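A minimal sketch based only on the sample event above, with the assumptions that the LPAR is the host token following the syslog timestamp, the description always sits inside the single quotes after Compliance Failure=, and the bracketed code ends the line:

"Compliance Failure" sourcetype="AMI SPM"
| rex field=_raw "^<\d+>\w{3}\s+\d+\s+[\d:]+\s+(?<LPAR>\S+)\s"
| rex field=_raw "Compliance Failure='(?<DESCRIPTION>[^']+)'"
| rex field=_raw "\[(?<DISANUM>[^\]]+)\]\s*$"
| stats count by LPAR DISANUM DESCRIPTION

Splitting the extraction into three small rex calls keeps each pattern readable; if the real events vary from the sample, each one can be adjusted independently.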
We are using HCL BigFix and HCL Insights as a data warehouse. There have been times when the import of data from HCL BigFix to HCL Insights partially failed with no indication that a failure had occurred. We would like to verify the HCL Insights data imported into Splunk against the HCL BigFix databases. Is there a way to run SPL that checks what's in Splunk against an external MS SQL database? I know how to create a DB connection and set up a read-only account, but I don't want to import data from the database, just verify the data already in Splunk.

index=patch sourcetype="ibm:bigfix:Patch"
| table BigFixDatabasePathTxt ComputerDNSNm ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm
| join type=inner ComputerId
    [ | dbxquery query="select BigFixDatabasePathTxt ComputerDNSNm ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm from patch where {put SPL output here?}" ]

We'd like the output to only show unmatched data.
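One possible shape for the "show only unmatched data" part, as a sketch: pull just the key columns from SQL Server with dbxquery, left-join them onto the Splunk results, and keep the rows that found no database counterpart. The connection name "bigfix_insights", the column list, and the table name are placeholders to replace with your own.

index=patch sourcetype="ibm:bigfix:Patch"
| table ComputerDNSNm ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm
| join type=left ComputerId FixletId
    [ | dbxquery connection="bigfix_insights" query="select ComputerId, FixletId from patch"
      | eval in_database=1 ]
| where isnull(in_database)

Note that join subsearches are capped (roughly 50,000 rows by default), so for large tables, exporting the database keys to a lookup and comparing against that may scale better.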
I am using auto-instrumentation for my .NET Core app (SignalFx Instrumentation) and would like to exclude the traces for requests to static files. How can I exclude these traces from being sent? Thanks
I have the following string:

SL=5601%20BLVD%20E%2C%20WESTON%20NEW%20YORK%2C%20NJ%20%2007093%20(WEST%20NEW%20YORK%20TOWN%2C%20HUDSON&f=json&outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D

I want to extract the address from this. I have tried regex with %20 and split, but nothing works.
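A minimal sketch of one approach: grab everything between "SL=" and the next "&" with rex, then run it through eval's urldecode() to turn the %20/%2C sequences back into spaces and commas. The makeresults and eval lines just recreate the sample string for testing; in practice you would point the rex at whatever field holds the real string.

| makeresults
| eval raw="SL=5601%20BLVD%20E%2C%20WESTON%20NEW%20YORK%2C%20NJ%20%2007093%20(WEST%20NEW%20YORK%20TOWN%2C%20HUDSON&f=json&outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D"
| rex field=raw "SL=(?<address_encoded>[^&]+)"
| eval address=urldecode(address_encoded)
| table address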
Is there a way to create a line break in the label for the Status Indicator visualization? I have the following dashboard:

<dashboard version="1.1">
  <label>Test dashboard</label>
  <row>
    <panel>
      <viz type="status_indicator_app.status_indicator">
        <search>
          <query>| makeresults
| eval partialA=15, totalA=57, partialB=132, totalB=543
| strcat partialA "/" totalA "V in " partialB "/" totalB "H" label
| eval icon=if(totalA=0,"check","warning")
| eval color=if(totalA=0,"green",if(partialA=0,"orange","red"))
| fields label icon color</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="status_indicator_app.status_indicator.colorBy">field_value</option>
        <option name="status_indicator_app.status_indicator.fillTarget">text</option>
        <option name="status_indicator_app.status_indicator.fixIcon">warning</option>
        <option name="status_indicator_app.status_indicator.icon">field_value</option>
        <option name="status_indicator_app.status_indicator.precision">0</option>
        <option name="status_indicator_app.status_indicator.showOption">1</option>
        <option name="status_indicator_app.status_indicator.staticColor">#555</option>
        <option name="status_indicator_app.status_indicator.useColors">true</option>
        <option name="status_indicator_app.status_indicator.useThousandSeparator">true</option>
      </viz>
    </panel>
  </row>
</dashboard>

That displays "15/57V in 132/543H" on a single line. I would like it to display the same text split across two lines (e.g. "15/57V in" on one line and "132/543H" on the next). I have tried using \n, <br/>, and escaped versions of those, to no avail. Is there a way to do what I want? Thanks!
Hi Team, I have data in my archive folder going back to 2019 for one of my indexes, app_o365, and we need to restore the complete data from the archive (frozen) buckets to searchable events. The steps below were recommended, but when running the rebuild command, how can we process the hundreds of bucket folders in a single step? Do we need to run it for each and every folder? Is there a way to run splunk rebuild for all db_ directories?

Restoring a frozen bucket (thawing an archived bucket):
– Copy the bucket directory from the archive to the index's thaweddb directory
– Stop Splunk
– Run splunk rebuild <path to bucket directory> (this also works to recover a corrupted directory, and does not count against the license)
– Start Splunk

I don't have any script to run the recovery process; any help here is much appreciated.
I have two different search queries and I want to calculate the sum of the differences between the time of event 1 and event 2 (in hours) for a common field (customID).

Query 1:
index=xacin sourcetype="xaxd" "*Completed setting deactivation timer for*" OR "grace period"
| rex "[cC]ustom:(?<customID>\w+)"
| dedup customID
| eval ltime=_time

customID   ltime
wj         1678118565.572
bi8m       1678089668.915
nri        1678060951.505

Query 2:
index=xacin sourcetype="xaxd" "*StatusHandler - Completed moving *"
| rex "custom:(?<customID>\w+)"
| dedup customID
| eval rtime=_time

customID   rtime
bi8m       1678118477.707
a2su       1678118456.775
ceo        1678118425.484
nri        1678089748.844

Since bi8m and nri are the common customIDs, I need the output to be:
(1678118477.707 - 1678089668.915) + (1678089748.844 - 1678060951.505) = 57606.131

I tried to come up with the following query, but clearly it's not working:

index=xacin sourcetype="xaxd" "*Completed setting deactivation timer for*" OR "grace period"
| rex "[cC]ustom:(?<customID>\w+)"
| dedup customID
| eval ltime=_time
| append
    [ search index=xacin sourcetype="xaxd" "*StatusHandler - Completed moving *"
      | rex "custom:(?<customID>\w+)"
      | dedup customID
      | eval rtime=_time ]
| stats count by customID
| where count > 1
| eval time_diff=(rtime-ltime)
| stats sum(time_diff)
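A minimal sketch of one way to get there: run both event types in a single search, tag each event as ltime or rtime with searchmatch(), and let stats pair them up per customID. This assumes (as the dedup in the originals suggests) that one event of each type per ID is what matters; latest() is used here, swap in earliest() or min()/max() if that fits better.

index=xacin sourcetype="xaxd" ("*Completed setting deactivation timer for*" OR "grace period" OR "*StatusHandler - Completed moving *")
| rex "[cC]ustom:(?<customID>\w+)"
| eval ltime=if(searchmatch("Completed setting deactivation timer") OR searchmatch("grace period"), _time, null())
| eval rtime=if(searchmatch("StatusHandler - Completed moving"), _time, null())
| stats latest(ltime) as ltime latest(rtime) as rtime by customID
| where isnotnull(ltime) AND isnotnull(rtime)
| eval time_diff=rtime-ltime
| stats sum(time_diff) as total_seconds
| eval total_hours=round(total_seconds/3600, 2)

With the sample values above, total_seconds would come out to 57606.131, and total_hours converts that to hours as the question asks.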
My Qualys VM detection pull stopped working, and I found a new warning log:

TA-QualysCloudPlatform (host_detection): 2023-03-06 08:54:15 PID=30479 [Thread-3] WARNING: Failed to parse API Output for endpoint /api/2.0/fo/asset/host/vm/detection/. Message: XML or text declaration not at start of entity: line 7, column 0

Has anyone come across this? I have no idea where to start when it comes to troubleshooting.
We are using a clustered SH setup. I have a dashboard that lists all triggered alerts. When a user clicks on one of the list items, I would like to use the sid as a token, which is then used as the argument for loadjob in another dashboard. The query is as simple as:

| loadjob <long-sid>

However, currently when a row is clicked, the result is always "Search did not return any events." I have configured the tokens correctly, and permissions also do not seem to be the issue. If I click the "Open in Search" button at the bottom of the dashboard, I get the results of "| loadjob <sid>" as expected.
Hi! I would like to know whether anyone has scheduled an Excel report based on an existing dashboard. I have created a dashboard that contains only one dropdown, in which the user chooses a single location; the dashboard then displays the counts for that location. Now I'd like to know if there's an easy way to take all that data (per location) and add it to one single Excel file.

For example, the dashboard looks like this:
Location dropdown: All, Avenue1, Avenue2, Avenue3...
Displaying: Panel 1, Panel 2, Panel 3
Each panel comes from 4 saved searches for that particular location, returning the numbers for WTD, MTD, QTD and YTD, which are then appended to create the columns.

I'd like to know if I can create an Excel file from this dashboard that can be scheduled to run daily. In other words, for each location (All locations, Avenue1, Avenue2, and so on), the data would be written into one single Excel file; if that cannot be done, at least one Excel file per location. In that case, I'd have to create 7 Excel files (1 per location), each containing the different panels for that location. That is what I was envisioning, but I don't know if it's possible. If anyone knows how I can approach this problem, or has a suggestion or workaround, that would be great. Thank you so much in advance.

Dyana
Hello Splunkers, I have the following sample data. I want to mask the value after "vin" (bold) to xxxxxx before indexing. Also, for data that was already ingested, I need to mask it on dashboards. I know the SED command is used for this, but I don't know how to use it.

search ownership for claimnumber = " ----" with request payload={
"tID" : ---------------
"adminInfo" : { },
"cNumber" : " --------",
"ier" : " -----",
"dateofloss": "------",
"vehicleinformation": {
"vin": "2323213123123",
"vee": "A"
}
"tis":{ "state":"XYZ" },
"on": "N"
"county":"-------"
}

Thanks in advance
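For events that are already indexed, one way to mask the value on dashboards is rex in sed mode at search time. A minimal sketch, assuming the VIN always appears as a quoted value following "vin": in the raw event (the index and sourcetype are placeholders to replace with your own); masking before indexing would instead use a SEDCMD setting in props.conf on the indexer or heavy forwarder, with a similar sed expression.

index=<your_index> sourcetype=<your_sourcetype> "vin"
| rex field=_raw mode=sed "s/(\"vin\"\s*:\s*\")[^\"]+/\1xxxxxx/g"

The sed expression keeps the "vin": " prefix (captured as \1) and replaces everything up to the closing quote with xxxxxx.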
Splunk Lantern is a customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk.

This month we're excited to announce that the Use Case Explorer for the Splunk Platform has arrived! This new tool is designed to inspire you as you develop new use cases using either Splunk Enterprise or Splunk Cloud Platform. We've also published a ton of new content covering a huge range of products, use cases, and industries! If you want to jump straight to our new articles, scroll to the bottom to read more.

Use Case Explorer for the Splunk Platform

Whether you're a seasoned Splunk user or just getting started, the Use Case Explorer for the Splunk Platform is a great tool to help you implement new use cases using either Splunk Enterprise or Splunk Cloud Platform. It contains use cases that have been developed for five key industries: Financial Services, Healthcare, Retail, Technology Communications and Media, and Public Sector. Each of these industries operates in a unique environment, with distinct challenges, so our use cases are carefully tailored to fit these needs. Financial Services, for example, holds a number of use cases to help customers detect fraud via ATMs, credit cards, and wire transfers. Healthcare contains guidance on maintaining HIPAA compliance. Or if you're looking to get inspired by a public sector use case, check out how NASA's ISS uses the Splunk platform to monitor metrics in its unique physical spaces.

But wait, there's more! The Use Case Explorer also contains a plethora of use cases designed to help you achieve your Security and IT Modernization goals, even if you're not using Splunk's premium Security and Observability products. (If you are using these products, you can check out the guidance for them within the Use Case Explorer for Security and the Use Case Explorer for Observability.) Like every use case in Lantern, every article comes with actionable, step-by-step guidance that you can follow to implement new use cases right away in your own environment. Head on over to the Use Case Explorer for the Splunk Platform now and see for yourself. Happy exploring!

Awesome New Articles

Team Lantern, along with experts from all across Splunk, have been working their tails off this month to publish a heap of new articles for you to explore. We're talking use cases galore and a huge range of tips that will make your head spin (in a good way, we promise!). Here are a few to start with:

Our Use Case Explorer for Security has undergone a number of updates, with new Adoption Maturity guides to help you prepare for, implement, and measure a number of critical security outcomes. See the new guides here:
– Threat intelligence
– Risk-based alerting
– Automation and orchestration
– Cyber frameworks
– Data sources & normalization

If you're interested in learning about using MITRE ATT&CK with Splunk Enterprise Security, check out another new Use Case Explorer for Security article on Assessing and expanding MITRE ATT&CK coverage. It contains SPL queries you can run to assess your coverage, and step-by-steps you can follow to quickly expand it.

We've also made a few updates to the Use Case Explorer for Observability. Identifying DNS reliability and latency issues and Monitoring availability and performance in non-public applications are two new articles that help Splunk Infrastructure Monitoring users investigating Kubernetes network issues, and Splunk Synthetic Monitoring users who want to improve digital experience.

We're excited to have launched a new Getting Started Guide: the Getting Started Guide for Log Observer Connect. Log Observer Connect is an integration that allows logs on Splunk Enterprise or Splunk Cloud Platform to be queried and associated with Related Content in Splunk Observability Cloud. This guide shows you how to get it set up, from ingesting logs to verifying success.

Finally, Lantern is a home for FAQs relating to Splunk Enterprise upgrades, and we've released a Splunk 9.0.4 FAQ that addresses the main questions you'll have about updating to this version.

Those are just a few highlights of what's been published on Lantern this month. Here's everything else that we haven't mentioned yet:
– Building a data-driven law enforcement strategy
– Identifying DNS reliability and latency issues
– Detecting malicious activities with Sigma rules
– Setting data retention rules in Splunk Cloud Platform
– Securing infrastructure-as-code with Zscaler Posture Control
– Data source: JupiterOne
– Optimizing and automating SecOps with JupiterOne
– Leveraging critical vulnerability insights for effective incident response
– Setting up deployment server apps for the enterprise environment

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Customer Journey Content Curator for Splunk Lantern
Hello fellow Splunk developers, I need to use the selected labels from a multiselect input in the form of a token. For a better explanation I created a short mock-up:

<form>
  <label>Test</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="multiselect" token="input" searchWhenChanged="true">
      <choice value="A">1</choice>
      <choice value="B">2</choice>
      <choice value="C">3</choice>
      <change>
        <set token="selectedLabel">$label$</set>
        <set token="selectedValue">$input$</set>
      </change>
    </input>
    <input type="radio" searchWhenChanged="true">
      <label>$selectedLabel$</label>
    </input>
    <input type="radio" searchWhenChanged="true">
      <label>$selectedValue$</label>
    </input>
  </fieldset>
</form>

If multiple values are selected, the "selectedValue" token contains all of the selected values; however, the "selectedLabel" token only contains the first label selected. Is this a bug or the intended behavior? Is there a way to store all labels inside a token? Please note that the radio buttons serve only to show the token values in their label fields.
Hi team, we are using Splunk at the enterprise level. I have received a requirement to refine and create logs in an efficient way that helps the run team understand and analyse issues whenever they come up. As a BA, I need to write the requirements for creating informative logs. For example, a reference number needs to be included in the error message whenever an API fails. Can someone please advise, or provide any documents/references to start with, on what information needs to be provided to redefine such logs and generate alerts?
I used $result.fieldname$, but that only returns one (the first) result of the search. Is there a way I can get all of the search results and loop over them, if possible?
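If this is in an alert action or email message, $result.fieldname$ only ever exposes the first result row. One common workaround, sketched below under the assumption that a single combined value is acceptable: collapse all the values into one field in the search itself, so the single row that $result.fieldname$ reads carries everything ("fieldname" and the base search are placeholders for your own).

<your base search>
| stats values(fieldname) as fieldname
| eval fieldname=mvjoin(fieldname, ", ")

True per-result looping generally means moving the logic into a custom alert action or a script that reads the full result set, rather than relying on the $result.*$ tokens.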
I successfully installed Splunk using ansible-role-for-splunk on a single machine, and it worked as expected. I am now trying to deploy a distributed Splunk system (7 VMs in total). I prepared the inventory based on https://github.com/splunk/ansible-role-for-splunk/blob/master/environments/production/inventory.yml. When I ran the playbook, the behaviour was 7 individual installations of Splunk instead of a distributed installation with an indexer cluster, search heads, etc. My understanding was that, based on the group names in the inventory, the Ansible role would install only the required components. Is that not true? I am posting my playbook and inventory file (as the first 2 replies). Thanks
Hi, I'm creating an application via the app wizard and I wish to render a custom view. I have a view.py file with the following function:

def display_action1(provides, all_app_runs, context):
    import phantom.rules as phantom
    context['results'] = ['a', 'b', 'c']
    phantom.debug('running display_action1')
    print('running display_action1')
    return 'display_action1.html'

which renders the correct view in the widget. However, I want to be able to view the debug or print statements. Where can I view this output?

Kind regards, Ansir