All Topics


Hi, I know many have answered this question before, but I didn't find a complete and detailed answer. Setup: UF ---> HF ---> IDX. Q1: I have a file called test.txt (location: /send/test.txt). I want to send the txt file from the UF to the HF to the IDX. (Receiving port 9997 is open and configured on all of them.) How do I do it?
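A minimal sketch of the forwarding chain described above. The index and sourcetype names are placeholders, not from the post; hostnames are placeholders too:

```
# UF inputs.conf - monitor the file
[monitor:///send/test.txt]
index = main
sourcetype = test_txt

# UF outputs.conf - forward to the HF
[tcpout:hf_group]
server = <hf-hostname>:9997
```

```
# HF inputs.conf - receive from the UF on 9997
[splunktcp://9997]

# HF outputs.conf - forward on to the indexer
[tcpout:idx_group]
server = <idx-hostname>:9997
```

After restarting each instance, the events should flow UF -> HF -> IDX and land in the chosen index.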
Hi, I have a panel with the query below:

index=int_166167 env=SIT appName="GCR" message="Post Login*"
| bucket _time span=15m
| stats count(userId) as loginUsers, min(timeTaken) as minSLA, max(timeTaken) as maxSLA by _time
| sort -_time
| table _time, loginUsers, minSLA, maxSLA

The panel appears like below:

_time             loginUsers  minSLA  maxSLA
28-02-2022 11:00  45          12      67
28-02-2022 11:15  60          13      74
28-02-2022 11:30  35          25      82
28-02-2022 11:45  46          34      45
28-02-2022 11:00  70          57      90
28-02-2022 12:00  35          24      57

My requirement is that on click of a maxSLA value (for example 90), it should link to a search which shows the particular max-SLA event with 90 from those 70 users. Kindly help on this.
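For a classic Simple XML table, this kind of cell drilldown is often wired up with a condition on the clicked field; a sketch, assuming the target event can be found by matching timeTaken to the clicked maxSLA value (token names like $click.value2$ and $row._time$ are standard drilldown tokens, but the exact target search is illustrative):

```
<drilldown>
  <condition field="maxSLA">
    <!-- $click.value2$ is the clicked cell's value -->
    <link target="_blank">search?q=search index=int_166167 env=SIT appName="GCR" message="Post Login*" timeTaken=$click.value2$</link>
  </condition>
</drilldown>
```

You would likely also want to constrain the drilldown's time range to the clicked 15-minute bucket via $row._time$.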
We utilise Enterprise Security and have a large number of detections in use. We have recently put in some testing hardware that could trigger any one of these alerts, and I am trying to find out if there is some way we could suppress or exclude a device if that host triggers these rules. Is there a way to effectively do a global "ignore any alerts from xxxx" without having to edit every single rule?
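ES implements notable event suppressions as eventtypes whose names start with notable_suppression- (they can also be created from the UI under Incident Review / Notable Event Suppressions). A sketch, keeping the post's xxxx placeholder for the host; the exact fields to match (src, dest, etc.) depend on your data model mappings:

```
# eventtypes.conf (or create via the ES UI)
[notable_suppression-testing_hardware]
search = src="xxxx" OR dest="xxxx" OR dvc="xxxx"
description = Suppress notables triggered by the testing hardware
```

This hides matching notables from Incident Review globally, without editing each correlation search.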
We are currently auditing the OSB Splunk user access accounts for both of our instances. Unfortunately, Splunk doesn't display under its user settings when an account has been disabled in LDAP (at the moment all the accounts are shown as 'Active'). Also, since user authentication is configured using LDAP, can you please confirm or advise why it is not possible for the Splunk UI to display an account's status as disabled when it has been disabled in LDAP?
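For reference, the user list the UI draws from can be pulled with a REST search; a sketch (note this reflects what Splunk knows about the accounts, not any LDAP-side enabled/disabled flag, which is why a disabled status does not appear):

```
| rest /services/authentication/users splunk_server=local
| table title realname email roles
```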
So I want to create an alert if one of our servers is not connected. The server disconnects automatically every 12 hours and reconnects again within a few minutes, so I only need an alert triggered if the server does not reconnect within 10 minutes of getting disconnected. *SourceName="AppLog" Message="service status *"* There are two logs that occur: one is "service status started" and the other is "service status stopped". I need the alert triggered only if the "service status started" log does not appear within 10 minutes of the "service status stopped" log message.
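A sketch of one way to express this as a scheduled alert (the index is a placeholder; assume the search runs every few minutes over the last day or so). It fires when the latest "stopped" event is more than 10 minutes old and no "started" event has arrived since:

```
index=<your_index> SourceName="AppLog" Message="service status *"
| eval status=case(like(Message, "%started%"), "started", like(Message, "%stopped%"), "stopped")
| stats latest(eval(if(status="stopped", _time, null()))) as lastStop
        latest(eval(if(status="started", _time, null()))) as lastStart
| where isnotnull(lastStop) AND (isnull(lastStart) OR lastStart < lastStop) AND now() - lastStop > 600
```

Set the alert to trigger when the number of results is greater than zero.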
Hi, I hope this is the right board to ask these questions; apologies if it's not. I have two issues with my account preferences at the moment. I work for a large organisation and there is a dedicated team looking after the Splunk Cloud platform we use here.

1. A while ago, my search formatting preferences changed somehow, so that search terms like stats no longer get colour highlighting to distinguish them in my searches. Now they are just plain text. I don't think I did anything to cause this.

2. I then realised that my default app preference is not being applied when I start Splunk.

Is this something I can fix, or do I need my admins to do it? They are all so busy right now, and I don't want to hassle them with something minor like this. My preferences all look like they are set correctly for both of these things to work. Thanks for any assistance, I know you are all probably busy too! Regards, John
I have a search in Splunk, and I need to take the output I receive and multiply it by 2. My search query is:

index=app1 AND service=app AND logLevel=INFO AND environment=staging "message.eventAction"=COMPLETE_CREATE
| stats dc(message.userId)

Using this search, I receive a distinct count of 8, but I want that number multiplied by 2. I cannot seem to figure out how to do this after reading other similar questions. I hope someone can help; it seems like it should not be this difficult for a simple multiplication.
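The usual pattern is to name the stats output and then eval on it; a sketch using the search from the question (the field names distinctUsers and doubled are illustrative):

```
index=app1 service=app logLevel=INFO environment=staging "message.eventAction"=COMPLETE_CREATE
| stats dc(message.userId) as distinctUsers
| eval doubled = distinctUsers * 2
```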
Hello Team, I have a lookup table with data on 1000 employees, like email, id and other fields. I have a search which also produces similar results: employee email, id, and status. I want to combine both of them so my search produces data only for employees who are in the lookup table. I tried passing a lookup, but it's fetching all the data. This is what I am using ("EmployeeEmail" is a field in the lookup table):

index=Employeedata sourcetype=data
| lookup InT_EM as EmployeeEmail
| table EmployeeEmail, status
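Two common ways to restrict a search to lookup members, sketched under the assumption that the lookup is named InT_EM and its email column is EmployeeEmail. First, a subsearch that turns the lookup rows into search filters:

```
index=Employeedata sourcetype=data
    [| inputlookup InT_EM | fields EmployeeEmail]
| table EmployeeEmail, status
```

Alternatively, keep the lookup command but filter on a returned field (the output field name found is illustrative):

```
index=Employeedata sourcetype=data
| lookup InT_EM EmployeeEmail OUTPUT EmployeeEmail as found
| where isnotnull(found)
| table EmployeeEmail, status
```

The subsearch form is usually faster because the filtering happens at search time rather than after retrieval.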
We are getting DHCP logs from Windows. I am trying to ingest these logs into UBA; however, UBA requires the lease_duration field. Where is this field? None of my logs has it, or the alert code for it.
I am looking for the best way to prepare for disaster recovery to a remote site. We have a 5-node indexer cluster, and I want the best backup strategy so I can create a daily backup that can be restored, in case of disaster, to a single-server instance of Splunk Enterprise to provide minimum functionality while the main site is unavailable. The strategy I am looking for should be able to construct a single tar-compressed file that consolidates all buckets from the 5-node indexer cluster, and restore it to a single-node server. Is there a way to construct a single backup (no bucket replication) from an indexer cluster?
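As a starting point for enumerating what such an archive would need to contain, the bucket inventory across the cluster can be listed per indexer; a sketch (run from a search head, the index filter is illustrative):

```
| dbinspect index=*
| table splunk_server index bucketId state path sizeOnDiskMB
```

Note that with replication enabled, the same bucket appears on multiple peers, so any consolidation step would need to deduplicate by bucketId.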
Here is the SPL:   index=name reqHost="host" | rex field=cookie "care_did=(?<care_did>[a-z0-9-]+)" | rex field=cookie "n_vis=(?<n_vis>[a-z0-9-\.]+)" | stats avg(_time) as _time, dc(care_did) as care_did_count, values(care_did) by n_vis   Any help on this is appreciated.
Hi, I have a dashboard that I have to create in Dashboard Studio, but I also have to use a dropdown input that gets its values from a lookup. So far I haven't found any way to create the input based on the results of a "| inputlookup <lookup_name>" search. Can someone assist with that? Thanks in advance.
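In Dashboard Studio's JSON source, dropdown items can be driven by a search data source; a sketch, assuming a lookup column named value (the IDs ds_lookup and input_dropdown, the token name, and the field renames are placeholders, and the items expression follows the documented dynamic-options shape, which may vary by version):

```
"dataSources": {
    "ds_lookup": {
        "type": "ds.search",
        "options": {
            "query": "| inputlookup <lookup_name> | eval label=value | table label value"
        }
    }
},
"inputs": {
    "input_dropdown": {
        "type": "input.dropdown",
        "dataSources": { "primary": "ds_lookup" },
        "options": {
            "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
            "token": "lookup_token"
        }
    }
}
```

The key point is that, unlike classic Simple XML, the search feeding the input is a separate data source referenced by the input, and the search must emit label and value columns.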
Hello Splunkers, I am trying to figure out the best approach and steps to migrate all knowledge objects (searches, dashboards, field aliases, etc.) and apps from one search head cluster to another search head cluster. What would be the steps to perform, and what needs to be done, in order to move them? Thanks in advance.
I've got a handful of files that seem to be ingested multiple times, though I can't quite figure out why. The file is a Tomcat log, its name is in the format hostname-stderr-dd-mm-yyyy.log, and it does not roll. Around once a day, but sometimes every other day or twice a day, the file will be re-ingested with a splunkd.log entry indicating:

02-23-2022 08:43:33.602 -0500 INFO WatchedFile [10484 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='C:\Tomcat.....

I've set crcSalt=<SOURCE> and played with initCrcLength to no avail, and everything in Answers referencing these splunkd entries that I've found says to change the crcSalt or initCrcLength settings, so I'm just trying to ensure I understand what exactly seekptr refers to here. Please correct me if I'm mistaken, but I think the 'seekptr' is the 'seekAddress' (making the checksum for it the 'seekCRC') referenced in the doc page below. My assumption is that the seekAddress is found but the CRC has somehow changed, so Splunk assumes the file is different. The problem is, after looking at the file before and after this happens, I see no reason why this CRC would have changed, and no amount of toying with crcSalt or initCrcLength will make a difference here, as it isn't the 'init' bit that's changing.

I've got a dashboard set up showing the same events repeated with the same timestamp but different ingest times, correlating with the above splunkd.log entries. My only theory is that Splunk somehow indexed the file mid-write by the application, if that is even possible? Other log files for this same application and location don't seem to do this, and I've not been able to find any known bugs specific to Tomcat stderr files (though it's certainly possible our people are doing something weird with the log config).
Relevant inputs.conf stanza:

[monitor://C:\Tomcat-*\logs\*stderr*.log]
index = app_logs
sourcetype = stderr
ignoreOlderThan = 1d
crcSalt = <SOURCE>

I've also manually put CHECK_METHOD=endpoint_md5 in props.conf, in case the check_method for stderr somehow got changed from the default somewhere along the way, and I've also confirmed that this isn't happening when the file's modified timestamp is updated. Next time I have some free time, I plan to grab another copy of the file before and after, and figure out a way to grab the seekptr and associated CRC from debug logs and compare them myself. ref: https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/Howlogfilerotationishandled
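For anyone reproducing this, duplicate ingests of identical raw events can be surfaced by comparing _indextime per event; a sketch using the index and sourcetype from the stanza above (grouping by _raw is heavy, so keep the time range narrow):

```
index=app_logs sourcetype=stderr
| eval indexed_at = strftime(_indextime, "%F %T")
| stats dc(_indextime) as ingest_count values(indexed_at) as ingest_times by source, _raw
| where ingest_count > 1
```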
Hi, I would like to create a dashboard to display uptime. I have a CSV file with a time field (15-minute bins) starting at 00:00:00 and running to 23:45, and a value field that is summed to calculate outage time. How do we calculate uptime from outage time?
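The arithmetic is just uptime % = (total minutes - outage minutes) / total minutes; a sketch, assuming the CSV is available as a lookup named outages.csv with an outage field in minutes per 15-minute bin (both names are placeholders):

```
| inputlookup outages.csv
| stats sum(outage_minutes) as totalOutage
| eval uptimePct = round((1440 - totalOutage) / 1440 * 100, 2)
```

1440 is the number of minutes in a day; if the outage value is in seconds, use 86400 instead.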
Hi, I'm trying to create a table as below:

method   lat      lon
blue     3578114  4960035
red
green
yellow   3578113  4960032

I tried using split, but I don't get the correct order, as shown below:

method                  lat               lon
blue red green yellow   3578113 3578114   4960032 4960035

An excerpt of the XML is below. I'm able to extract the correct order if I use xpath, but sometimes the XML source file has extra data at the end, which prevents xpath from reading the data. Is there a way to read the "method" elements below besides using xpath, such as:

| xpath outfield=lat_blue "//response_data/position_data/PositioningMethodAndUsage[@method='blue']/position_estimate/pointWithAltitudeAndUncertaintyEllipsoid/geographicalCoordinates/lat"

I want to bullet-proof this in case the XML file is broken (note the excerpt below is cut off mid-document, which is exactly the kind of input I have to handle):

<file>
<reference_id>12345678</reference_id>
<session_id>1256555</session_id>
<positioning_request_time utc_off="-0800">19800228082202</positioning_request_time>
<network type="iden"></network>
<response_data type="Success">
<position_data>
<PositioningMethodAndUsage method="blue" locationReturn="NO">
<positionresultCode>99</positionresultCode>
<timeStamp utc_off="-0800">20220228082203</timeStamp>
</PositioningMethodAndUsage>
<PositioningMethodAndUsage method="red" locationReturn="NO">
<positionresultCode>99</positionresultCode>
<timeStamp utc_off="-0800">20220228082203</timeStamp>
</PositioningMethodAndUsage>
<PositioningMethodAndUsage method="green" sourceOfAltitude="3D" locationReturn="YES">
<positionresultCode>1</positionresultCode>
<position_estimate>
<pointWithAltitudeAndUncertaintyEllipsoid>
<geographicalCoordinates>
<latSign type="North"></latSign>
<lat>3878113</lat>
<lon>-4360032</lon>
</geographicalCoordinates>
<altitudeAndDirection>
<directionOfAltitude>height</directionOfAltitude>
<altitude>232</altitude>
</altitudeAndDirection>
</pointWithAltitudeAndUncertaintyEllipsoid>
</position_estimate>
</PositioningMethodAndUsage>
<PositioningMethodAndUsage method="yellow" locationReturn="NO">
<positionresultCode>1</positionresultCode>
<position_estimate>
<pointWithAltitudeAndUncertaintyEllipsoid>
<geographicalCoordinates>
<latSign type="North"></latSign>
<lat>3878114</lat>
<lon>-4360035</lon>
</PositioningMethodAndUsage>
</response_data>
</file>
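A sketch of an xpath-free approach that splits the raw text on the element boundary and extracts per segment with rex, which tolerates truncated or malformed XML because it never parses the document as a tree (all element and attribute names are taken from the excerpt above; rows keep their document order, so each method stays aligned with its own lat/lon):

```
| eval blocks = split(_raw, "<PositioningMethodAndUsage ")
| mvexpand blocks
| rex field=blocks "^method=\"(?<method>[^\"]+)\""
| rex field=blocks "<lat>(?<lat>-?\d+)</lat>"
| rex field=blocks "<lon>(?<lon>-?\d+)</lon>"
| where isnotnull(method)
| table method lat lon
```

Methods without a position (like red in the excerpt) simply come out with empty lat/lon cells.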
I have a small environment. There are 3 users that are allowed to log in to a particular server. If I search:

index=<index name> user=<username> OR user=<username> OR user=<username>

I find all instances of them logging in. How can I find users that are not one of those 3 users? I want to set up an alert that will let me know when someone other than those 3 is trying to log in.
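A sketch of the inverted search, using the same placeholders as the question; NOT (...) excludes events where user matches any of the three, and user=* restricts the alert to events that actually carry a user field:

```
index=<index name> user=* NOT (user=<username> OR user=<username> OR user=<username>)
```

Saved as an alert triggering on more than zero results, this fires whenever anyone outside the allowed three attempts a login.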
I am trying to set up our Splunk architecture to be able to receive events from clients/workstations outside our local network. The simplest solution is just making the main indexer externally accessible, but we don't want to do that. Is there a way to set up a Heavy Forwarder like a proxy, to receive events from external clients and then send them on to the main indexer? I haven't been able to find anything related to this when I try to research it. Thanks.
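This intermediate-forwarder pattern is usually just two stanzas on the HF; a sketch, with port and hostname as placeholders (SSL settings are omitted here, but an internet-facing listener should almost certainly use them):

```
# inputs.conf on the HF (e.g. in a DMZ) - listen for external forwarders
[splunktcp://9997]

# outputs.conf on the HF - relay everything to the internal indexer
[tcpout:internal_indexers]
server = <indexer-hostname>:9997
```

External clients then point their own outputs.conf at the HF instead of at the indexer, so only the HF needs to be externally reachable.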
Hello dear colleagues, has anyone encountered this error? I checked search.log for inconsistent metadata. I have a search on the SH, and when I run it I get this error. Please help me resolve it.
Hi, I have a JavaScript file and I want it to be applicable to all dashboards. Is there any way to do that without copying and pasting the reference into each html file? Thanks.