All Topics

I have a saved search in which I successfully pass and use a token from a dashboard to drive it. I then added a second token to use in a secondary search and this always fails. I have a dashboard panel that does the following: |savedsearch "Locations - Super Query" source_region=blue@bimlocs-p-ew1 | eval categoryIn = case("$category$"="ALL", "", 1=1, "$category$") | search category=categoryIn | table agent, category, categoryIn, containerId, line.ul-operation, line.ul-log-data.http_response_code, oxygenId, _time, line.ul-log-data.http_url, source* If I remove the secondary search using the eval parameter from the token it works and the token string correctly appears in the table results. The strings are identical in the results. I don't understand what is wrong with the secondary search. I've tried = and other approaches for the string comparison which works everywhere else, but always fails in my secondary search. I must be missing something obvious as to why. Any help is appreciated.
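A possible explanation, offered as a sketch rather than a confirmed fix: `| search category=categoryIn` compares the category field against the literal string "categoryIn", not against the value of the categoryIn field. A field-to-field comparison needs `where` (field names below are taken from the post):

```
| savedsearch "Locations - Super Query" source_region=blue@bimlocs-p-ew1
| eval categoryIn = case("$category$"="ALL", "", 1=1, "$category$")
| where categoryIn="" OR category=categoryIn
| table agent, category, categoryIn, containerId, line.ul-operation, line.ul-log-data.http_response_code, oxygenId, _time, line.ul-log-data.http_url, source*
```

The `categoryIn=""` clause keeps all categories when the token is ALL, mirroring the empty string the case() produces in that branch.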
I thought I created a user, but it's not there. Then, when I try to create it again, the message "Cannot create user that already exists" pops up... where is it?
Hello, I am noticing a lot of stalled IE instances in browser snapshots. However, I do not see the issue from the application side. One common thing I notice in the stalled pages is a delay in the start of the adrum call. What might be causing this delay? Below is one of my sessions which is marked as stalled in AppDynamics, however I did not notice the delay from the application side. Thanks,
For the following example JSON message (formatted to make it easier to read), how can I configure props.conf to inform Splunk that it should use data.timestamp for its event timestamp?

{
  "publish_time": 1580824871.446,
  "data": {
    "textPayload": "DEBUG | 2020-02-04T14:01:05,760 | A very long string here...<snip>",
    "logName": "blah0",
    "receiveTimestamp": "2020-02-04T14:01:07.707699223Z",
    "labels": {
      "k8s-pod/version": "blah2",
      "k8s-pod/track": "blah3",
      "k8s-pod/app": "blah4",
      "k8s-pod/pod-template-hash": "blah5"
    },
    "insertId": "blah6",
    "resource": {
      "type": "k8s_container",
      "labels": {
        "project_id": "blah7",
        "pod_name": "blah8",
        "cluster_name": "blah9",
        "location": "blah10",
        "container_name": "blah11",
        "namespace_name": "blah12"
      }
    },
    "severity": "INFO",
    "timestamp": "2020-02-04T14:01:05.760888513Z"
  },
  "attributes": {
    "logging.googleapis.com/timestamp": "2020-02-04T14:01:05.760888513Z"
  }
}

Would the following be correct and performant?

File: props.conf
---snip---
[google:gcp:pubsub:message]
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = data.timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%Z
---snip---
I have several lookup tables containing various data types (filenames, hashes, emails, usernames, etc.; the lookup tables are separated by data type). Each of these lookup tables also has a UUID column for a specific entry, so the CSV headers for filename data look like: "fileName","uuid" The "fileName" data may actually only be a partial filename. Within the context of the CSV neither of these columns' data is unique, but together fileName+UUID are. QUESTION: Given a query such as the one below, which returns interesting events, I need help implementing SPL to add a dict (for example: {"matchedValue": value, "UUIDS": [uuid1,uuid2,uuid**n]}) to each event. What SPL do I need to add? Note: this does not necessarily need to be a dict; adding two fields to each event, one "matchedValue" field and a "UUIDS" field with a delimited string of UUIDs, works too. index=USB_activity_data [|inputlookup interesting-filenames.csv | fields fileName | rename fileName as query] END GOAL: My goal is to push these modified events to another in-house non-Splunk application. To achieve this I've started working on my first Splunk App with the Python SDK (I've played with other Splunk Python apps before, but this is my first from scratch). I've framed a StreamingCommand in this app to format the event so our in-house application can accept it, and have another command that will do the posting.
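One hedged sketch for the matchedValue/UUIDS fields: after the base search, run the same CSV as a lookup so every matching row's uuid comes back as a multivalue field, then flatten it. The event-side field name (file_name below) is an assumption; adjust it to wherever the matched filename lives in your events, and note that partial-filename matching would additionally need a WILDCARD match_type in the lookup definition:

```
index=USB_activity_data [| inputlookup interesting-filenames.csv | fields fileName | rename fileName as query]
| lookup interesting-filenames.csv fileName AS file_name OUTPUT fileName AS matchedValue, uuid AS UUIDS
| eval UUIDS = mvjoin(UUIDS, ",")
```

When several CSV rows share the same fileName, the lookup returns uuid as a multivalue field, which mvjoin collapses into the delimited string the post asks for.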
Our search head pool nodes were recently upgraded from 6.6.1 to 7.3.0. After the upgrade, the scheduled searches have failed, breaking the reports and alerts. The log shows lots of empty warning messages:

12-24-2019 23:59:59.227 +0100 WARN SavedSplunker -

So DEBUG logging was enabled, and we see messages like:

12-24-2019 23:59:59.226 +0100 DEBUG SavedSplunker - savedSearchUpdated field changed: savedsearch_id="admin;search;InternalSplunkdLogAlert", field_name="__ss_type", old_val="scheduled", new_val=""
12-24-2019 23:59:59.225 +0100 DEBUG SavedSplunker - AlertNotifier queued notifications=0, managedSearchCount=0, managedSchedulerSearchCount=0, managedSchedulerRTSearchCount=0
12-24-2019 23:59:59.224 +0100 DEBUG SavedSplunker - lock file already exists, search head skipping execution of: savedsearch_id="nobody;splunk_monitoring_console;DMC Asset - Build Standalone Asset Table", now=1562854500,

The Search Head Pooling feature has been deprecated for some time, but there was no mention of a bug or behavior change that I could find. What happened?
I have data in a CSV called 25_million_Linie_Rule.csv (example below):

host,source,count
"INTERFACES_BUILD","/hp547srv1/apps/INTERFACES_BUILD/logs/traces/mxtiming_956675_hp547srv.fr.murex.com_**1254**.log",31436700

I also have data in real time. If the data in real time is the same as the .csv, I don't want to report it. So an outer join is needed, but I can't get it to work.

| tstats count where index="mlc_live" OR index="mxtiming_live" by host source
| dedup source
| sort 0 - count
| head 10
| where count > 25000000
| table host source count
| join type=outer source [| inputlookup 25_million_Linie_Rule.csv ]

OUTPUT is below. (However, I get a line I already have in the CSV; I should only get one line, the new one, not the one already in the .csv.)

host source count
INTERFACES_BUILD /hp547srv1/apps/INTERFACES_BUILD/logs/traces/mxtiming_956675_hp547srv.fr.murex.com_**1254**.log 31436700
INTERFACES_BUILD /hp547srv1/apps/INTERFACES_BUILD/logs/traces/mxtiming_956678_hp547srv.fr.murex.com_**1992**.log 26617140

Any help would be great
Rob
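Since join type=outer keeps every left-side row whether or not it matched, one option (a sketch, not tested against your data) is to drop rows whose source already appears in the CSV with a NOT subsearch instead of a join:

```
| tstats count where index="mlc_live" OR index="mxtiming_live" by host source
| dedup source
| where count > 25000000
| search NOT [| inputlookup 25_million_Linie_Rule.csv | fields source]
| sort 0 - count
| head 10
| table host source count
```

The subsearch turns the CSV's source column into exclusion terms, so only sources absent from the lookup survive.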
I have the log snippet below and want to extract the id and hostname into 2 different fields. For example, the expected output from below is:

id host
rmk123 abc.bbb.com@hostname.com
rmc143 bdc.ab.cpm@hostname.com

[01/05/2020 13:06:21] BAUAJM_I_30031 Client [CA WAAE Auto:15515][5][abc@hostname.com:50019:12.304.593.10] [0x80c91100][02/05/2020 13:06:21.8474][0:rmk123@abc.bbb.com@hostname.com 0] API ID [34] execution started.
[01/05/2020 13:06:21] BAUAJM_I_30032 Client [CA WAAE Auto:15509][5][bdc.ab.cpm@hostname.com:12345:19.304.293.10] [0x28bbbfff][02/05/2020 13:06:21.6946][0:rmc143@bdc.ab.cpm@hostname.com 0] API ID [66] execution completed. Total time: 0.132519 seconds.

Please assist
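One possible extraction, assuming the id always sits between "[0:" and the first "@", and the host runs from that "@" to the space before the closing "0]":

```
... | rex "\[0:(?<id>[^@]+)@(?<host>\S+)\s+0\]"
| table id, host
```

Against the two sample lines this would yield id=rmk123 with host=abc.bbb.com@hostname.com and id=rmc143 with host=bdc.ab.cpm@hostname.com; adjust the anchors if other bracketed fields in your logs can match this shape.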
I want to be alerted when the motor load on a machine is more than 10 higher or lower than the other motor load (they should usually be balanced and share the load evenly; an unbalanced load could lead to a failure). Would I essentially need to build two alerts for this? One for alerting when Z is 10 higher than Y, and another for when the Z motor is 10 lower than Y? Or can it be done in one? If ZLoad > (YLoad + 10) when [tag]=STOPPED
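This can likely be done in one alert by comparing the absolute difference, sketched below with the ZLoad/YLoad field names from the post (the assumption is that both loads arrive on the same event; add your [tag]=STOPPED condition as needed):

```
... | eval load_diff = abs(ZLoad - YLoad)
| where load_diff > 10
```

abs() makes one threshold cover both "Z is 10 higher" and "Z is 10 lower", so a single scheduled alert on this search replaces the two separate ones.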
Hi, We have a requirement where we were asked to retrieve 3-month-old data from the frozen state into Splunk. We need inputs on the following:
1) How to identify the buckets inside the frozen data for that particular index and that time frame?
2) Will there be an impact on indexer performance while thawing the data?
3) Do I need to thaw data on every indexer in the cluster?
4) What would be its impact on the cluster?
5) Will it count against the license?
6) Do I have to attach a disk on every indexer for thawing data?
7) How will it go back to the frozen state?
Thanks
Saurabh
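For question 1, frozen bucket directory names encode the time range as db_<newest epoch>_<oldest epoch>_<id>, so the 3-month window can be identified from the names. The usual thawing procedure, sketched here with placeholder paths and a hypothetical index name that need adjusting:

```
# copy the frozen bucket into the target index's thaweddb directory
cp -r /path/to/frozen/db_1585699199_1577836800_42 $SPLUNK_HOME/var/lib/splunk/yourindex/thaweddb/
# rebuild the bucket's index files and metadata
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/yourindex/thaweddb/db_1585699199_1577836800_42
# restart so the thawed bucket becomes searchable
$SPLUNK_HOME/bin/splunk restart
```

On questions 5 and 7: thawed data does not count against the ingest-based license, and buckets in thaweddb are exempt from retention policies, so they stay until you remove them manually rather than being re-frozen automatically.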
Hi, I have a HF with 250 GB dedicated to the /opt directory, and this space is being filled up quickly, even though I set indexAndForward = false in the outputs.conf file. Does anyone know why this happens? Thanks,
Problem: I have a GUI running as javaw.exe and I want to identify when this GUI is "Not Responding". Tools: I am using the following command to identify it: tasklist /v | findstr javaw.exe
What is working:
- Running this command from the command prompt
- Adding this command to a .bat and running that from the command prompt
What is not working:
- Using inputs.conf to run the script (it executes and indexes the result, but the "Status" is "Unknown" instead of "Running")
What I have tried:
- I have looked at permissions, and my user and "System" both have adequate permissions (for the .bat, its parent folders)
- I have tried adjusting the SplunkForwarder service to run as my user instead of System
-- It is noteworthy that, after running the service as my user, if I run a tasklist command and look up splunk, it still doesn't know what user is running the service
Has anyone run into this? Anyone have any advice on how to fix it? Is there a better way to look for a hanging window/process with Splunk?
Hello, I am trying to simplify a search in Splunk to take only my principal endpoints and not the detail transactions. I am using regex to filter this, but it still shows me all the details. What I want to see is the availability of the endpoint, not separated per transaction. Here is my query:

sourcetype="api-core"
| rename request.body{}.value.request.http_status_code as http_req_result
| convert num(http_req_result) as http_res
| where http_res > 0
| rename http_res as "RequestStatus", request.body{}.value.request.endpoint as Endpoint
| regex Endpoint="^\W\D+\w.\D+"
| stats count(eval(RequestStatus>0)) as total, count(eval(RequestStatus>200)) as errors by Endpoint
| eval disponibilidad=(100-(errors/total*100))
| eval disponibilidad = round(disponibilidad,0)
| table Endpoint, disponibilidad
| sort disponibilidad

This returns this result:

/accounts/v1/credit_lines/0205087584/transactions 0
/accounts/v1/credit_lines/0205202927/transactions 0
/accounts/v1/credit_lines/0207414358/transactions 0
/accounts/v1/credit_lines/0207440484/transactions 0
/accounts/v1/credit_lines/0209367114/transactions 0
/accounts/v1/credit_lines/0210909021/transactions 0
/accounts/v1/credit_lines/0210997318/transactions 0
/accounts/v1/credit_lines/0211293790/transactions 0
/accounts/v1/credit_lines/0213211449/transactions 0
/accounts/v1/credit_lines/0213285496/transactions 0
/accounts/v1/credit_lines/0213523143/transactions 0
/accounts/v1/credit_lines/0214261457/transactions 0
/authentication/v1/mfa/168831676/otp 0
/clients/v1/clients/165839218/reward_points 0
/clients/v1/clients/121049368 50
/clients/v1/clients/166947472

What I want is to group by endpoint, for example "/accounts/v1/credit_lines", "/authentication/v1/mfa", "/clients/v1/clients/", and see all the transactions together, not separated. Please help. Thanks in advance
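One hedged way to collapse the per-transaction paths is to extract just the leading /area/version/resource portion with rex before the stats, instead of filtering with the regex command. The three-segment pattern below is an assumption based on the sample endpoints shown:

```
sourcetype="api-core"
| rename request.body{}.value.request.http_status_code as http_req_result
| convert num(http_req_result) as http_res
| where http_res > 0
| rename http_res as "RequestStatus", request.body{}.value.request.endpoint as Endpoint
| rex field=Endpoint "(?<EndpointGroup>^/[^/]+/v\d+/[^/]+)"
| stats count(eval(RequestStatus>0)) as total, count(eval(RequestStatus>200)) as errors by EndpointGroup
| eval disponibilidad = round(100-(errors/total*100), 0)
| table EndpointGroup, disponibilidad
| sort disponibilidad
```

Grouping by the extracted EndpointGroup rolls every account-specific transaction path up into one availability figure per principal endpoint.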
Primary focus is obtaining SSPR logs ASAP and then learning what else can be ingested.
One out of the eight indexers has two queues filled up for a couple of hours: the parsing and aggregation queues. What can be done besides waiting for them to clear? I believe we use the default queue sizes, which are relatively small... index=_internal host=<indexer name> "ERROR" sourcetype=splunkd doesn't show much besides communication errors.
I have my Search Head Cluster authentication working with SAML integration with our IdP. But currently our IdP sends our first and last names in two different attributes, shown below...

<saml:Attribute Name="FirstName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" >
  <saml:AttributeValue xmlns:q5="http://www.w3.org/2001/XMLSchema" p7:type="q5:string" xmlns:p7="http://www.w3.org/2001/XMLSchema-instance" >FIRSTNAME</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute Name="LastName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" >
  <saml:AttributeValue xmlns:q6="http://www.w3.org/2001/XMLSchema" p7:type="q6:string" xmlns:p7="http://www.w3.org/2001/XMLSchema-instance" >LASTNAME</saml:AttributeValue>
</saml:Attribute>

Is there a way in the SAML configuration for the RealName Alias to concatenate these two attribute values?
Hey everyone, Is there a way to check which kind of authentication method is being used by Splunk in a log? (Splunk itself, SAML, or LDAP) Thanks in advance
Hey everyone, I'm trying to come up with a way to get a table stating that a user created in Splunk had the "Require password change on first login" box checked. Is there any way to get that information? Thanks in advance
I have an inputs.conf stanza that I want to add. I am adding it to monitor all files and sub-directories. Throughout these sub-directories there are .bz2 files that I don't want to ingest. What would be the best way? I was thinking of blacklisting the .bz2 files, but that may not be the best way? Maybe there's a better way?

[monitor:///var/log/remote/*]
blacklist=(*.bz2*)
index=nix_os
disabled = 0

There could be any number of sub-directories below the remote/ parent directory. Thanks!
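Blacklisting is a reasonable approach here; the one catch is that blacklist takes a regular expression, not a glob, so `(*.bz2*)` is not a valid pattern. A sketch of the stanza with an anchored regex:

```
[monitor:///var/log/remote/*]
blacklist = \.bz2$
index = nix_os
disabled = 0
```

The `\.bz2$` pattern skips any monitored path ending in .bz2, at any depth below remote/.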
The data I am receiving sends multiple JSON objects that have the same keys within them. EDIT: I've added a sample log. This is a single event, and I need to count each DELETE_RETIRED_DEVICE, so 3 in this case. There are no commas between the JSON objects; they are 3 separate objects.

{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200024,"actionAt":1580947200024,"device":{"uuid":"","phoneNumber":"","platform":"Android 8.0"},"actor":{"miUserId":9062,"principal":"","email":"-"},"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":"Global","spacePath":"/1/","actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200024,"completedAt":1580947200024,"reason":"Deleted the retired device successfully","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":"","subjectType":"Smartphone","subjectName":" (Android 8.0 - 12406901520)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200292,"actionAt":1580947200292,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"SYSTEM_CONFIG_CHANGE","requestedAt":1580947200292,"completedAt":1580947200292,"reason":"Modify Preference lastDeleteRetiredDevicesStatus from Successful, 2020-02-05 00:00:00 UTC to Successful, 2020-02-06 00:00:00 UTC","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":"Settings Preferences","subjectName":"System","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200292,"actionAt":1580947200292,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200292,"completedAt":1580947200292,"reason":"Initiated retired device count = 2, deleted retired device count = 2","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":null,"subjectName":"misystem (Source - DailyJob, Bulk deletion - 2)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200011,"actionAt":1580947200011,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200011,"completedAt":1580947200011,"reason":"Initiating bulk deletion of 2 retired device(s)","status":"Initiated","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":null,"subjectName":"misystem (Source - DailyJob, Bulk deletion - 2)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}

Below are the abbreviated objects:
{actionType ... other keys/values}
{actionType ... other keys/values}
{actionType ... other keys/values}
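One hedged sketch for counting the DELETE_RETIRED_DEVICE objects inside a single event: pull out every actionType occurrence with max_match=0 (so rex returns a multivalue field) and count the matching values, with coalesce covering events that have none:

```
... | rex max_match=0 "\"actionType\":\"(?<actionType>[^\"]+)\""
| eval delete_count = coalesce(mvcount(mvfilter(match(actionType, "^DELETE_RETIRED_DEVICE$"))), 0)
```

On the sample event above this would count 3, one per embedded object, without needing the objects to parse as a single valid JSON document.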