All Topics

Here is the SPL:

index=name reqHost="host"
| rex field=cookie "care_did=(?<care_did>[a-z0-9-]+)"
| rex field=cookie "n_vis=(?<n_vis>[a-z0-9-\.]+)"
| stats avg(_time) as _time, dc(care_did) as care_did_count, values(care_did) by n_vis

Any help on this is appreciated.
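If the intent is to keep the averaged _time readable in the output, one possible variant is the sketch below; naming the values() column and the fieldformat line are assumptions about intent, not part of the original post.

index=name reqHost="host"
| rex field=cookie "care_did=(?<care_did>[a-z0-9-]+)"
| rex field=cookie "n_vis=(?<n_vis>[a-z0-9-\.]+)"
| stats avg(_time) as _time, dc(care_did) as care_did_count, values(care_did) as care_dids by n_vis
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")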
Hi, I have a dashboard that I have to create in Dashboard Studio, and I also have to use a dropdown input that gets its values from a lookup. So far I haven't found any way to create the input based on the results of "| inputlookup <lookup_name>". Can someone assist with that? Thanks in advance.
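One piece that can be sketched safely is the search that would feed the dropdown; in Dashboard Studio this would live in a search data source that the dropdown's dynamic options point at. The lookup name my_lookup.csv and the field host are placeholders, not from the original post.

| inputlookup my_lookup.csv
| stats count by host
| fields host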
Hello Splunkers, I am trying to figure out the best approach and steps to migrate all the knowledge objects (searches, dashboards, field aliases, etc.) and apps from one search head cluster to another search head cluster. What would be the steps to perform, and what should we watch out for when moving them? Thanks in advance.
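One commonly used piece of such a migration is pushing the apps and knowledge objects through the new cluster's deployer; a hedged sketch, where the paths, hostname, and credentials are placeholders:

# on the new cluster's deployer: stage the apps/knowledge objects
cp -r <old_app> $SPLUNK_HOME/etc/shcluster/apps/
# push the bundle to any member of the new search head cluster
splunk apply shcluster-bundle -target https://<new_sh_member>:8089 -auth admin:<password>

User-level private objects are not covered by this and generally need separate handling.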
I've got a handful of files that seem to be ingested multiple times, though I can't quite figure out why. The file is a Tomcat log, its name is in the format hostname-stderr-dd-mm-yyyy.log, and it does not roll. Around once a day, but sometimes every other day or twice a day, the file will be re-ingested, with a splunkd.log entry indicating:

02-23-2022 08:43:33.602 -0500 INFO WatchedFile [10484 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='C:\Tomcat.....

I've set crcSalt=<SOURCE> and played with initCrcLength to no avail, and everything in Answers referencing these splunkd entries that I've found says to change the crcSalt or initCrcLength settings, so I'm just trying to make sure I understand what exactly seekptr refers to here. Please correct me if I'm mistaken, but I think the 'seekptr' is the 'seekAddress' (making the checksum for it the 'seekCRC') referenced in the doc page below, so my assumption is that the seekAddress is found but the CRC has somehow changed, so Splunk assumes the file is different. The problem is that, after looking at the file before and after this happens, I see no reason why this CRC would have changed, and no amount of toying with crcSalt or initCrcLength will make a difference here, since it isn't the 'init' bit that's changing.

I've got a dashboard set up showing the same events repeated with the same timestamp but different ingest times, correlating with the above splunkd.log entries. My only theory is that Splunk somehow indexed the file mid-write by the application, if that is even possible. Other log files for this same application and location don't seem to do this, and I've not been able to find any known bugs specific to Tomcat stderr files (though it's certainly possible our people are doing something odd with the log config).

Relevant inputs.conf stanza:

[monitor://C:\Tomcat-*\logs\*stderr*.log]
index=app_logs
sourcetype=stderr
ignoreOlderThan=1d
crcSalt=<SOURCE>

I've also manually put CHECK_METHOD=endpoint_md5 in props.conf in case the check_method for stderr got changed from the default somewhere along the way, and I've confirmed that this isn't happening when the file's modified timestamp is updated. Next time I have some free time I plan to grab another copy of the file before and after, and figure out a way to grab the seekptr and associated CRC and compare them myself based on debug logs.

ref: https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/Howlogfilerotationishandled
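To line up the re-reads with the duplicated events on a timeline, one possible helper search (a sketch built around the splunkd.log line quoted above, not part of the original post) is:

index=_internal sourcetype=splunkd component=WatchedFile "Checksum for seekptr didn't match"
| rex "file='(?<file>[^']+)'"
| timechart count by file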
Hi, I would like to create a dashboard to display uptime. I have a CSV file with a time field (15-minute bins) starting at 00:00:00 and running to 23:45, and a value field that is summed up to calculate outage time. How do we calculate uptime from the outage time?
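A minimal sketch of the arithmetic, assuming a hypothetical lookup outages.csv whose rows cover one day and carry an outage_minutes field (the real file and field names are not in the post); with 1440 minutes in a day, uptime percent is (1440 - outage) / 1440:

| inputlookup outages.csv
| stats sum(outage_minutes) as total_outage
| eval uptime_pct = round((1440 - total_outage) / 1440 * 100, 2)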
Hi, I'm trying to create a table as below:

method   lat       lon
blue     3578114   4960035
red
green
yellow   3578113   4960032

I tried using split, but I don't get the correct order; instead the values collapse into multivalue cells and lose their row alignment:

method                    lat                 lon
blue red green yellow     3578113 3578114     4960032 4960035

An excerpt of the XML is below. I'm able to extract the correct order if I use xpath, but sometimes the XML source file has extra data at the end, which prevents xpath from reading the data. Is there a way to read the "method" elements below besides using xpath, such as

| xpath outfield=lat_blue "//response_data/position_data/PositioningMethodAndUsage[@method='blue']/position_estimate/pointWithAltitudeAndUncertaintyEllipsoid/geographicalCoordinates/lat"

I want to bullet-proof this in case the XML file is broken.

<file>
  <reference_id>12345678</reference_id>
  <session_id>1256555</session_id>
  <positioning_request_time utc_off="-0800">19800228082202</positioning_request_time>
  <network type="iden"></network>
  <response_data type="Success">
    <position_data>
      <PositioningMethodAndUsage method="blue" locationReturn="NO">
        <positionresultCode>99</positionresultCode>
        <timeStamp utc_off="-0800">20220228082203</timeStamp>
      </PositioningMethodAndUsage>
      <PositioningMethodAndUsage method="red" locationReturn="NO">
        <positionresultCode>99</positionresultCode>
        <timeStamp utc_off="-0800">20220228082203</timeStamp>
      </PositioningMethodAndUsage>
      <PositioningMethodAndUsage method="green" sourceOfAltitude="3D" locationReturn="YES">
        <positionresultCode>1</positionresultCode>
        <position_estimate>
          <pointWithAltitudeAndUncertaintyEllipsoid>
            <geographicalCoordinates>
              <latSign type="North"></latSign>
              <lat>3878113</lat>
              <lon>-4360032</lon>
            </geographicalCoordinates>
            <altitudeAndDirection>
              <directionOfAltitude>height</directionOfAltitude>
              <altitude>232</altitude>
            </altitudeAndDirection>
          </pointWithAltitudeAndUncertaintyEllipsoid>
        </position_estimate>
      </PositioningMethodAndUsage>
      <PositioningMethodAndUsage method="yellow" locationReturn="NO">
        <positionresultCode>1</positionresultCode>
        <position_estimate>
          <pointWithAltitudeAndUncertaintyEllipsoid>
            <geographicalCoordinates>
              <latSign type="North"></latSign>
              <lat>3878114</lat>
              <lon>-4360035</lon>
      </PositioningMethodAndUsage>
  </response_data>
</file>
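One way to stay resilient when the XML is truncated is to cut the raw event into one chunk per PositioningMethodAndUsage element and pull lat/lon out of each chunk with rex instead of xpath. This is a sketch, not a verified answer: it assumes the whole XML document sits in _raw of a single event, and the "|||" delimiter is an arbitrary choice assumed not to occur in the data.

| rex max_match=0 "(?s)<PositioningMethodAndUsage[^>]*method=\"(?<method>[^\"]+)\"(?<block>.*?)(?=<PositioningMethodAndUsage|</response_data>|$)"
| eval pair=mvzip(method, block, "|||")
| mvexpand pair
| eval method=mvindex(split(pair, "|||"), 0), block=mvindex(split(pair, "|||"), 1)
| rex field=block "<lat>(?<lat>-?\d+)</lat>"
| rex field=block "<lon>(?<lon>-?\d+)</lon>"
| table method lat lon

Because method and its surrounding block are captured in the same match, a block with no lat/lon simply yields blank cells instead of shifting values onto the wrong row.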
I have a small environment. I have 3 users that are allowed to log in to a particular server. If I search:

index=<index name> user=<username> OR user=<username> OR user=<username>

I find all instances of them logging in. How can I find users that are not one of those 3 users? I want to set up an alert that will let me know when someone other than those 3 is trying to log in.
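A minimal sketch of the inverted search, keeping the same placeholders as above:

index=<index name> NOT user IN (<username1>, <username2>, <username3>)
| stats count by user

Saving that search as an alert that triggers when the number of results is greater than zero would cover the notification part.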
I am trying to set up our Splunk architecture to be able to receive events from clients/workstations outside our local network. The simplest solution is just making the main indexer externally accessible, but we don't want to do that. Is there a way to set up a heavy forwarder like a proxy that receives events from external clients and then sends them to the main indexer? I haven't been able to find anything related to this when I try to research it. Thanks.
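A sketch of the usual intermediate-forwarder wiring, with the hostname and ports as placeholders: the externally reachable heavy forwarder listens for forwarded data and relays it to the internal indexer.

# inputs.conf on the heavy forwarder (externally reachable)
[splunktcp://9997]
disabled = 0

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = internal-indexer.example.local:9997

External clients then point their own outputs.conf at the heavy forwarder rather than at the indexer.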
Hello, colleagues. Has anyone encountered this error? I have a search on a search head; when I run it, I get this error, and I checked search.log for the inconsistent metadata message. Please help me resolve it.
Hi, I have a JavaScript file and I want it to apply to all dashboards. Is there any way to do that without copying and pasting the reference into each dashboard file? Thanks.
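One approach that avoids per-dashboard references, assuming these are Simple XML dashboards that live in a single app, is to name the file dashboard.js and place it in the app's static directory, where it is picked up by every dashboard of that app:

$SPLUNK_HOME/etc/apps/<your_app>/appserver/static/dashboard.js

A restart or a static-asset refresh is typically needed before the change shows up.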
Hi, I am trying to force users to use en-US as the locale even if they try to use any other. If they replace en-US in the URL with any other locale, it should redirect back to en-US. Is there any way to solve this?
Any help is greatly appreciated. How do I convert the following JSON into a table?

{
  "Summary": {
    "jobType": "jobA",
    "summaryId": 22746666,
    "objectsArchived": [
      { "name": "tableA", "count": 855 },
      { "name": "tableB", "count": 678 }
    ]
  }
}

Jobtype | SummaryId | Table  | Count
jobA    | 22746666  | tableA | 855
jobA    | 22746666  | tableB | 678
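A minimal sketch using spath and mvexpand, with the column names taken from the table above; it assumes the JSON is the raw text of a single event.

| spath path=Summary.jobType output=Jobtype
| spath path=Summary.summaryId output=SummaryId
| spath path=Summary.objectsArchived{} output=archived
| mvexpand archived
| spath input=archived path=name output=Table
| spath input=archived path=count output=Count
| table Jobtype SummaryId Table Count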
Hey there, I have a field, let's say "abc", with values such as: 1,3,5,7,5,3,2,1,5,7,8,5,1,1,2,2,3,2,1,1,2,3,2,3. What I am trying to do here is first a stats count by abc | where count > 2, and then a stats dc(abc) by "some other field". I have tried to do it but am unable to get any results, and I'm not sure if there is another option to perform it. Thanks.
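A sketch of one way to keep the other field available after the count filter is to use eventstats, which annotates events instead of collapsing them; some_other_field is a placeholder for the real field name.

... | eventstats count by abc
| where count > 2
| stats dc(abc) as distinct_abc by some_other_field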
I have an accelerated CIM data model. The indexes used to populate the data model (and the accelerated summaries) are defined by a macro (a typical CIM approach: cim_Email_indexes, cim_Network_Traffic_indexes, and so on). What will happen if I change this macro to include an additional index? Will Splunk: a) just add data from the new index to the next summary rebuild, starting from the last summarized timestamp; b) add data from the new index looking back up to the Summary Range during the next rebuild; or c) rebuild the whole summaries back up to the Summary Range?
Hi all, I'm trying to set up the Splunk Add-on for Microsoft Office 365: https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configuretenant When adding a tenant I receive an error message: "ConnectionResetError 104". What could be the reason? What are all the required Azure URLs that the add-on needs to connect to? Thank you in advance.
Is there a way for a user without admin privileges to export an existing lookup file locally for processing, and then, after a manual update, upload a CSV with the same file name?
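A sketch of the search-level half of this, assuming a hypothetical lookup named my_lookup.csv: the user can read and export the current contents with inputlookup, and write an updated set of rows back under the same name with outputlookup, provided their role has write access to that lookup.

| inputlookup my_lookup.csv

| inputlookup my_lookup_updated.csv
| outputlookup my_lookup.csv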
Hi all, does anyone know if there's any way to make transaction start and end with the proper results? I have a transaction URL startswith=STATUS=FAIL endswith=STATUS=PASS. The data has a pattern like FAIL,PASS,FAIL,PASS,PASS,FAIL,FAIL,FAIL,PASS... The transaction command doesn't work well here. My requirement is to get the immediate PASS URL after the FAIL one; in a situation like FAIL......PASS it only takes the last FAIL,PASS pair, and I want it to take the whole FAIL..............PASS span. Does anyone know how to do this?
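A sketch of a streamstats-based alternative that starts a new group at the first FAIL after a PASS and closes it at the next PASS; it assumes STATUS and URL are already extracted fields, as in the post.

| sort 0 _time
| streamstats current=f last(STATUS) as prev_status by URL
| eval new_group=if(STATUS="FAIL" AND (isnull(prev_status) OR prev_status="PASS"), 1, 0)
| streamstats sum(new_group) as group_id by URL
| where group_id > 0
| stats earliest(_time) as first_fail_time, min(eval(if(STATUS="PASS", _time, null()))) as first_pass_time by URL group_id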
Does Splunk support HIDS features such as monitoring data traffic and suspicious activity on the computer infrastructure?
Hello, has anyone else encountered this problem on a search head?

KV Store changed status to failed. No suitable servers found: `serverSelectionTimeoutMS` expired.

I tried all the solutions that I could find related to this problem, but without success.
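A first diagnostic step (a sketch, not a guaranteed fix) is to check the KV store's own view of its state from the CLI and then look at the mongod log on the same search head:

splunk show kvstore-status
# then inspect $SPLUNK_HOME/var/log/splunk/mongod.log for the underlying error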
Hi, I'm trying to group items by a specific field and get all the values returned (i.e. without aggregation). I have the following: I'm trying to convert that to: I have tried

| chart values(value) by field
| transpose header_field=field

However, values(value) only selects unique values; I'm looking for all values.
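A sketch that keeps every occurrence instead of the de-duplicated set is to swap values() for list(), which preserves duplicates and ordering up to its multivalue limit; otherwise it mirrors the attempt above.

| chart list(value) by field
| transpose header_field=field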