All Topics


We have the following data:

date        Scope
12/11/2020  Linux Shadow
17/02/2023  Linux Project
20/02/2023  Linux Project
21/02/2023  Linux Project
22/02/2023  Linux Project
23/02/2023  Linux Project
24/02/2023  Linux Project
27/02/2023  Linux Project
28/02/2023  Linux Project
01/03/2023  Linux Project
01/03/2023  Linux Project
01/03/2023  Linux Project
02/03/2023  Linux projet
03/03/2023  Linux Project
03/03/2023  Linux Project
06/03/2023  Linux Project
06/03/2023  Linux Project

We need to extract the latest Scope with respect to the latest date. The latest date is 06/03/2023, so its Scope is "Linux Project"; we need to get this value. The result should list every date alongside that single Scope value:

date: 01/03/2023, 02/03/2023, 03/03/2023, 06/03/2023, 12/11/2020, 17/02/2023, 20/02/2023, 21/02/2023, 22/02/2023, 23/02/2023, 24/02/2023, 27/02/2023, 28/02/2023
Scope: Linux Project

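One possible SPL sketch (assuming the date and Scope fields are already extracted; the index and sourcetype names are placeholders):

index=your_index sourcetype=your_sourcetype
| eval epoch=strptime(date, "%d/%m/%Y")
| sort 0 - epoch
| stats values(date) AS date first(Scope) AS Scope

Here first(Scope) picks the Scope of the most recent date because the events were just sorted descending by epoch, while values(date) keeps the full, deduplicated date list.
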
Hi team, I am very new to Splunk and just started using it recently. We are consuming around 60+ integration APIs in our application. Whenever any API fails, the logs should print the API name plus the error in Splunk. How do I achieve this? Example: Getcustomerdetails failed with 500 error.

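Assuming the application already writes lines of the form "<ApiName> failed with <code> error", a hedged SPL sketch for reporting on them (the index and sourcetype names are placeholders):

index=app_logs sourcetype=your_app "failed with" "error"
| rex "(?<api_name>\w+) failed with (?<status_code>\d+) error"
| stats count BY api_name, status_code
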
Dear Community, we know that there are several options to mask sensitive data before/during ingestion. But generally, how do you scan your data to check whether there is any existing leakage of secrets/tokens/passwords? I've googled and searched the community, but I did not find anything. I thought there might be a Splunk app for this, or that Splunk ES has a built-in feature, like a professional, fast, effective alert or an AI/ML-assisted one. What I've done so far for a few indexes (with a last-15-minutes search interval):

index={INDEXNAME} | stats values(*) AS * | transpose | table column | rename column AS Fieldnames | search Fieldnames=*secret* OR Fieldnames=*password*

Is there any better solution out there? Or do you have a better idea for handling this? How are others doing this? We have Splunk Cloud Platform, but I think it would be the same for Enterprise as well. Thank you very much! Regards, DG

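That search only inspects field names. A hedged sketch that also scans raw event text for common secret shapes (the regex is illustrative, not exhaustive, and may be expensive over large indexes):

index=* earliest=-15m
| regex _raw="(?i)(api[_-]?key|secret|passw(or)?d|token)\s*[:=]\s*\S+"
| stats count BY index, sourcetype
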
Dear team, I have activated the Splunk Cloud Platform and need to send Windows machine logs to it. As a prerequisite, I have seen two add-ons for Microsoft Windows on the Splunk Cloud Platform, which I am unable to install on Cloud. Kindly let me know the way forward to install them. On installing, the add-on says the username and password are incorrect, but the username and password are correct.

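For context, once the add-on is in place on a Universal Forwarder, a minimal inputs.conf sketch for the standard Windows Event Log channels might look like this (these stanza names are the usual ones from the Splunk Add-on for Microsoft Windows; the index name is a placeholder and must already exist in your Cloud stack):

[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog

[WinEventLog://Application]
disabled = 0
index = wineventlog
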
Hi all, I have a demo Enterprise Security setup: IDX (1), SH (3), FWD (1), master and deployer (1), so one SHC with three search heads. When I install the app "splunk_app_stream" on the deployer and deploy it to the SHC, splunkd works but Splunk Web access stops working. I have set the following:

master node -> replication_factor = 1, search_factor = 1
SH node -> replication_factor = 3

I do not know what this problem is.

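For reference, the standard way to push the configuration bundle from the deployer (the target is any SHC member; host and credentials here are placeholders):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

If Splunk Web breaks only after this push, comparing the web.conf shipped inside the deployed app against the members' existing settings may narrow down the cause.
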
I am looking to chart a field that contains a request path, but I want to display, and get a total count of, all events that contain the root request path (a) together with events that contain the root + <some guid>/contents (b). The path is a field I manually extracted called "request_path_in_request". Example of the paths I want to combine in the chart:

(a) path=/v4/layers/asPlanted
(b) path=/v4/layers/asPlanted<some guid>/contents

Here is my Splunk query so far:

source="partners-api-ol" request_path_in_request="/v*" | timechart count by request_path_in_request useother=f limit=10

Here is how that field is currently charted (screenshot omitted). Is there a way to show only the category "/v4/layers/asPlanted", but have the count be the total of all events with that root path?

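A sketch that collapses both variants onto the root path before charting (the replace pattern assumes anything after the root segment can be folded into it):

source="partners-api-ol" request_path_in_request="/v*"
| eval root_path=replace(request_path_in_request, "^(/v4/layers/asPlanted).*$", "\1")
| timechart count BY root_path useother=f limit=10
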
Currently, I have a Postgres system hosted on Red Hat Linux, with a Universal Forwarder installed on it. I have configured inputs.conf as below, under /opt/splunk/etc/apps/SplunkForwarder/local/inputs.conf:

[monitor:///var/lib/pgsql/data/log]
disabled = 0
crcSalt = <SOURCE>
index = pgsql

On the Postgres host, these are the log files under /var/lib/pgsql/data/log:

postgresql-Fri.log
postgresql-Mon.log
postgres-Sat.log
postgres-Tue.log

Issue: I do not see the logs arriving in the index above (pgsql); instead they arrive in the main index.

Note: I have to use crcSalt = <SOURCE> because of how Splunk fingerprints files on their first 256 bytes; otherwise I would not see the logs in any index.

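A quick way to confirm which inputs stanza the forwarder actually applies, and from which app it comes (standard btool usage, run on the forwarder):

$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/lib/pgsql/data/log --debug

If another app defines the same monitor path without index=pgsql, events can end up in main.
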
Hello, I have a log like:

<182>Mar 1 18:18:24 SND1 Policy Manager severity=Info saf=1 safd=RACF record=Mar 1 13:17:31 SND1 baspm[67174579]: Compliance Failure='Sensitive Dataset=USS.SND2.VAR resides on z/OS shared DASD volume=SN2U01 but is not part of SPM dataset filter=SHRD' [DS33795]

I would like to extract these fields:

SND1 as the LPAR field
[DS33795] as the DISANUM field
'Sensitive Dataset=USS.SND2.VAR resides on z/OS shared DASD volume=SN2U01 but is not part of SPM dataset filter=SHRD' as the DESCRIPTION field

Can you help me write the regex? I started with the following:

"Compliance Failure" sourcetype="AMI SPM"
| rex field=_raw "^(?:[^:\n]*:){2}\d+(?P<LPAR>\s+\w+)(?:[^\[\n]*\[){2}(?P<DISANUM>\w+)" offset_field=_extracted_fields_bounds
| stats count by DISANUM

but I am not able to get the string after Compliance Failure into the DESCRIPTION field. Thanks in advance, Maurizio

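A hedged single-rex sketch that captures all three fields, anchored on the literal markers in the sample event above (adjust if real events vary):

"Compliance Failure" sourcetype="AMI SPM"
| rex field=_raw "^<\d+>\w+\s+\d+\s+[\d:]+\s+(?<LPAR>\w+).*Compliance Failure='(?<DESCRIPTION>[^']+)'\s+\[(?<DISANUM>[^\]]+)\]"
| table LPAR, DISANUM, DESCRIPTION
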
We are using HCL BigFix and HCL Insights as a data warehouse. There have been times when the import of data from HCL BigFix to HCL Insights partially failed with no indication a failure had occurred. We would like to verify the HCL Insights data imported into Splunk against the HCL BigFix databases. Is there a way to run SPL that checks what's in Splunk against an external MS SQL database? I know how to create a DB connector and set up a read-only account, but I don't want to import data from the database, just verify the data already in Splunk.

index=patch sourcetype="ibm:bigfix:Patch"
| table BigFixDatabasePathTxt ComputerDNSNm ComputerId FixletId FixletIsRelevantInd FixletLastBecameRelevantDtm
| join type=inner ComputerId
    [ | dbxquery query="select BigFixDatabasePathTxt, ComputerDNSNm, ComputerId, FixletId, FixletIsRelevantInd, FixletLastBecameRelevantDtm from patch where {put SPL output here?}" ]

We'd like the output to only show unmatched data.

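One hedged pattern for surfacing unmatched rows: pull both sides, then keep key combinations that appear on only one side. This sketch assumes ComputerId plus FixletId identify a row, and that a DB Connect connection named "bigfix" exists; both are placeholders:

index=patch sourcetype="ibm:bigfix:Patch"
| fields ComputerId FixletId
| eval side="splunk"
| append
    [| dbxquery connection="bigfix" query="SELECT ComputerId, FixletId FROM patch"
     | eval side="database"]
| stats values(side) AS side dc(side) AS side_count BY ComputerId FixletId
| where side_count=1
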
I am using auto-instrumentation for my .NET Core app (SignalFx Instrumentation) and would like to exclude the traces for requests to static files. How can I exclude these traces from being sent? Thanks

I have the following string:

SL=5601%20BLVD%20E%2C%20WESTON%20NEW%20YORK%2C%20NJ%20%2007093%20(WEST%20NEW%20YORK%20TOWN%2C%20HUDSON&f=json&outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D

I want to extract the address from this. I have tried regex with %20, and split, but nothing works.

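A sketch that URL-decodes the string first and then extracts the SL parameter (urldecode is a built-in eval function; the makeresults wrapper is only there so the sketch runs standalone):

| makeresults
| eval raw="SL=5601%20BLVD%20E%2C%20WESTON%20NEW%20YORK%2C%20NJ%20%2007093%20(WEST%20NEW%20YORK%20TOWN%2C%20HUDSON&f=json&outSR=%7B%22latestWkid%22%3A3857%2C%22wkid%22%3A102100%7D"
| eval decoded=urldecode(raw)
| rex field=decoded "SL=(?<address>[^&]+)"
| table address
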
Is there a way to create a line break in the label for the Status Indicator visualization? I have the following dashboard:

<dashboard version="1.1">
  <label>Test dashboard</label>
  <row>
    <panel>
      <viz type="status_indicator_app.status_indicator">
        <search>
          <query>| makeresults | eval partialA=15, totalA=57, partialB=132, totalB=543 | strcat partialA "/" totalA "V in " partialB "/" totalB "H" label | eval icon=if(totalA=0,"check","warning") | eval color=if(totalA=0,"green",if(partialA=0,"orange","red")) | fields label icon color</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="status_indicator_app.status_indicator.colorBy">field_value</option>
        <option name="status_indicator_app.status_indicator.fillTarget">text</option>
        <option name="status_indicator_app.status_indicator.fixIcon">warning</option>
        <option name="status_indicator_app.status_indicator.icon">field_value</option>
        <option name="status_indicator_app.status_indicator.precision">0</option>
        <option name="status_indicator_app.status_indicator.showOption">1</option>
        <option name="status_indicator_app.status_indicator.staticColor">#555</option>
        <option name="status_indicator_app.status_indicator.useColors">true</option>
        <option name="status_indicator_app.status_indicator.useThousandSeparator">true</option>
      </viz>
    </panel>
  </row>
</dashboard>

That displays the label on one line:

15/57V in 132/543H

I would like it to display on two lines:

15/57V in
132/543H

I have tried using \n, <br/>, and escaped versions of those, to no avail. Is there a way to do what I want? Thanks!

Hi Team, I have data in my archive folder going back to 2019 for one of my indexes, app_o365, and we need to restore the complete data from the archive buckets to searchable events. The steps below were recommended, but when running the rebuild command, how can we process the hundreds of bucket folders in a single step? Do we need to run it for each and every folder? Is there a way to run splunk rebuild for all db_ directories?

Restoring a frozen bucket (to thaw an archived bucket):
- Copy the bucket directory from the archive to the index's thaweddb directory
- Stop Splunk
- Run splunk rebuild <path to bucket directory> (this also works to recover a corrupted directory, and does not count against the license)
- Start Splunk

I don't have any script to run the recovery process; any help here is much appreciated.

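A hedged shell sketch that loops over every thawed db_* bucket and rebuilds each one (the paths assume a default install location; run it while Splunk is stopped, after copying the buckets into thaweddb):

for bucket in /opt/splunk/var/lib/splunk/app_o365/thaweddb/db_*; do
    /opt/splunk/bin/splunk rebuild "$bucket"
done
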
I have two different search queries and I want to calculate the sum of differences between the times of event 1 and event 2 (in hours) for a common field (customID).

Query 1:

index=xacin sourcetype="xaxd" "*Completed setting deactivation timer for*" OR "grace period"
| rex "[cC]ustom:(?<customID>\w+)"
| dedup customID
| eval ltime=_time

customID  ltime
wj        1678118565.572
bi8m      1678089668.915
nri       1678060951.505

Query 2:

index=xacin sourcetype="xaxd" "*StatusHandler - Completed moving *"
| rex "custom:(?<customID>\w+)"
| dedup customID
| eval rtime=_time

customID  rtime
bi8m      1678118477.707
a2su      1678118456.775
ceo       1678118425.484
nri       1678089748.844

Since bi8m and nri are the common customIDs, I need to output: (1678118477.707 - 1678089668.915) + (1678089748.844 - 1678060951.505) = 57606.131. I tried to come up with the following query, but clearly it's not working:

index=xacin sourcetype="xaxd" "*Completed setting deactivation timer for*" OR "grace period"
| rex "[cC]ustom:(?<customID>\w+)"
| dedup customID
| eval ltime=_time
| append
    [search index=xacin sourcetype="xaxd" "*StatusHandler - Completed moving *"
    | rex "custom:(?<customID>\w+)"
    | dedup customID
    | eval rtime=_time]
| stats count by customID
| where count > 1
| eval time_diff=(rtime-ltime)
| stats sum(time_diff)

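A hedged single-search sketch that combines both event types, pairs them by customID, and sums the differences (searchmatch tags each event by type; max() stands in for the dedup-keeps-latest behavior):

index=xacin sourcetype="xaxd" ("*Completed setting deactivation timer for*" OR "grace period" OR "*StatusHandler - Completed moving *")
| rex "[cC]ustom:(?<customID>\w+)"
| eval ltime=if(searchmatch("Completed setting deactivation timer") OR searchmatch("grace period"), _time, null())
| eval rtime=if(searchmatch("StatusHandler - Completed moving"), _time, null())
| stats max(ltime) AS ltime max(rtime) AS rtime BY customID
| where isnotnull(ltime) AND isnotnull(rtime)
| eval time_diff=rtime-ltime
| stats sum(time_diff) AS total_diff_seconds
| eval total_diff_hours=round(total_diff_seconds/3600, 2)
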
My Qualys VM detection pull stopped working. I found a new warning log.   TA-QualysCloudPlatform (host_detection): 2023-03-06 08:54:15 PID=30479 [Thread-3] WARNING: Failed to parse API Output for endpoint /api/2.0/fo/asset/host/vm/detection/. Message: XML or text declaration not at start of entity: line 7, column 0   Has anyone come across this? I have no idea where to start when it comes to troubleshooting.
We are using a clustered SH setup. I have a dashboard that lists all triggered alerts. When a user clicks one of the list items, I would like to use the sid as a token, passed as the argument to loadjob in another dashboard. The query is as simple as:

| loadjob <long-sid>

However, currently, when a row is clicked, the result is always "Search did not return any events." I have configured the tokens correctly, and permissions also do not seem to be the issue. If I click the "Open in Search" button at the bottom of the dashboard, I get the results of "| loadjob <sid>" as expected.

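For reference, a sketch of the drilldown wiring this describes (assuming the sid appears in a table column named sid; the target dashboard name is hypothetical):

<drilldown>
  <link target="_blank">/app/search/alert_details?form.sid=$row.sid$</link>
</drilldown>

If the artifact behind that sid has expired, or lives on a different cluster member than the one serving the second dashboard, loadjob can come back empty.
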
Hi! I would like to know whether anyone has scheduled an Excel report based on an existing dashboard. I have created a dashboard that contains only one dropdown, in which the user chooses a single location; the dashboard then displays the counts for that location. Now I'd like to know if there's an easy way to gather all that data (per location) and add it to one single Excel file.

For example, the dashboard looks like this:

Location dropdown: All, Avenue1, Avenue2, Avenue3...
Displaying: Panel 1, Panel 2, Panel 3

Each panel comes from 4 saved searches for that particular location, returning the numbers for WTD, MTD, QTD and YTD, then appending the numbers to create the columns. I'd like to know if I can create an Excel file from this dashboard that can be scheduled to run daily. In other words, for each location (All locations, Avenue1, Avenue2, and so on), can it be written into one single Excel file? If that cannot be done, at least get each location per Excel file; in that case, I'd have to create 7 Excel files (1 file per location), each containing the different panels for that location. What I was envisioning follows, but I don't know if it's possible. If anyone knows how I can approach this problem, or has a suggestion or workaround, that would be great too. Thank you so much in advance.

Dyana

Hello Splunkers, I have the following sample data. I want to mask the value after "vin" (bold) to xxxxxx before indexing. Also, if the data was already ingested, I need to mask it on dashboards. I know the SED command is used for this, but I don't know how to use it.

search ownership for claimnumber = " ----" with request payload={
  "tID" : ---------------
  "adminInfo" : { },
  "cNumber" : " --------",
  "ier" : " -----",
  "dateofloss": "------",
  "vehicleinformation": {
    "vin": "2323213123123",
    "vee": "A"
  }
  "tis": {
    "state": "XYZ"
  },
  "on": "N"
  "county": "-------"
}

Thanks in advance

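A hedged props.conf sketch using SEDCMD to mask the vin value at index time (the sourcetype name is a placeholder; the pattern assumes the JSON shape shown above and must be applied on the parsing tier, before indexing):

[your:sourcetype]
SEDCMD-mask_vin = s/("vin"\s*:\s*")[^"]+(")/\1xxxxxx\2/g

For data that is already indexed, a search-time equivalent can hide the value on dashboards:

... | eval _raw=replace(_raw, "(\"vin\"\s*:\s*\")[^\"]+(\")", "\1xxxxxx\2")
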
Hello fellow Splunk developers, I need to use the selected labels from a multiselect input in the form of a token. For a better explanation, I created a short mock-up:

<form>
  <label>Test</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="multiselect" token="input" searchWhenChanged="true">
      <choice value="A">1</choice>
      <choice value="B">2</choice>
      <choice value="C">3</choice>
      <change>
        <set token="selectedLabel">$label$</set>
        <set token="selectedValue">$input$</set>
      </change>
    </input>
    <input type="radio" searchWhenChanged="true">
      <label>$selectedLabel$</label>
    </input>
    <input type="radio" searchWhenChanged="true">
      <label>$selectedValue$</label>
    </input>
  </fieldset>
</form>

If multiple values are now selected, the "selectedValue" token contains all the selected values; however, the "selectedLabel" token only contains the first selected value (screenshot omitted). Is this a bug or the intended behavior? Is there a way to store all labels inside a token? Please note that the radio buttons serve only to show the token values in their label fields.

Hi team, we are using Splunk at the enterprise level. I have received a requirement to refine and create logs in an efficient way that helps the run team understand and analyse issues whenever they occur. As a BA, I need to write the requirements for creating informative logs. For example, a reference number needs to be included in the error message whenever an API fails. Can someone please advise, or provide any documents/references to start with, on what information needs to be provided to redefine such logs and generate alerts?
