All Topics

Good afternoon! We need to send a field with a dot in its name in the message: result.code. But the search in which I reference this field fails.

The search:

    index="main" sourcetype="testsystem-script99"
    | transaction maxpause=10m srcMsgId Correlation_srcMsgId messageId result.code
    | table _time srcMsgId Correlation_srcMsgId messageId result.code
    | fields _time srcMsgId Correlation_srcMsgId messageId result.code
    | sort srcMsgId _time
    | streamstats current=f window=1 values(_time) as prevTime by subject
    | eval timeDiff=_time-prevTime
    | delta _time as timeDiff
    | where (result.code)>0

The error:

    Error in 'where' command: Type checking failed. The '>' operator received different types.
    The search job has failed due to an error. You may be able to view the job in the Job Inspector.

The error does not occur with any of these alternatives: resultcode, result-code, result_code. Please tell me, what could be the problem?

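In eval and where expressions the dot is SPL's string-concatenation operator, so where parses result.code as the field result concatenated with the field code, and comparing the resulting string with > is what triggers the type error; commands like table and fields are unaffected because they take field names rather than expressions. The usual fix is to wrap the dotted name in single quotes so where treats it as one field:

    | where 'result.code' > 0
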
Hi, I'm trying to update a KV store so that the only entries in it will be for consecutive returns from a search.

For example, say the KV store has the existing fields:

    Title          Count
    Daily Check 1  5
    Daily Check 2  1
    Daily Check 3  1

and the search returns:

    Label
    Daily Check 1
    Daily Check 3
    Daily Check 4

The new KV store should look like:

    Title          Count
    Daily Check 1  6
    Daily Check 3  2
    Daily Check 4  1

Thanks!

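One possible approach, sketched under the assumption that the collection is exposed through a lookup definition (called daily_checks_lookup here, a hypothetical name): look up the existing Count for each returned Label, increment it (or start at 1 for new labels), and write the result back. Because outputlookup replaces the collection by default, labels that did not return this time drop out automatically.

    <your search that returns Label>
    | dedup Label
    | rename Label as Title
    | lookup daily_checks_lookup Title OUTPUT Count
    | eval Count = coalesce(Count, 0) + 1
    | table Title Count
    | outputlookup daily_checks_lookup
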
Hi everyone, I'm stuck with an issue I can't understand... I created an app that uses a custom alert action which generates events to log (it writes a file under $SPLUNK_HOME/var/spool/). An example of the file:

    Name: 1664448416_92764.stash_sourcetype1

    ***SPLUNK*** index="myindex" host="Host1" source="Source1"
    ==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
    {...event...}

I have set up an inputs.conf which is looking for this file:

    [batch://$SPLUNK_HOME/var/spool/splunk/...stash_sourcetype1]
    queue = stashparsing
    sourcetype = stash_sourcetype1
    move_policy = sinkhole
    crcSalt = <SOURCE>

Under my props.conf, I have:

    [stash_sourcetype1]
    TRUNCATE = 0
    # only look for ***SPLUNK*** on the first line
    HEADER_MODE = firstline
    # we can summary index past data, but rarely future data
    MAX_DAYS_AGO = 10000
    # 5 years difference between two events
    MAX_DIFF_SECS_AGO = 155520000
    MAX_DIFF_SECS_HENCE = 155520000
    TIME_PREFIX = (?m)^\*{3}Common\sAction\sModel\*{3}.*$
    MAX_TIMESTAMP_LOOKAHEAD = 25
    LEARN_MODEL = false
    # break .stash_new custom format into events
    SHOULD_LINEMERGE = false
    BREAK_ONLY_BEFORE_DATE = false
    LINE_BREAKER = (\r?\n==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==\r?\n)
    KV_MODE = json
    TRANSFORMS-0parse_cam_header = orig_action_name_for_stash_cam,orig_sid_for_stash_cam,orig_rid_for_stash_cam,sourcetype_for_stash_cam
    TRANSFORMS-1sinkhole_cam_header = sinkhole_cam_header

As you can see, I have configured my props.conf to read the first line ("***SPLUNK***") in order to recover the index, host and source. However, Splunk continues to write all events to the "main" index and uses default values for "source" and "host". It's as if it ignores this directive when it should take it into account. Does someone know why it's being ignored? I can't find much documentation on this issue... For your information, I'm working on a standalone instance of Splunk Enterprise. Thank you

EDIT: I've just noticed that my events are indexed using the sourcetype "stash_sourcetype1-too_small"; this could be the reason, but why is it adding "-too_small" and how can I prevent it?

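The "-too_small" suffix comes from Splunk's automatic sourcetype learning, which only runs when a file arrives with no explicit sourcetype; learned sourcetypes typically take their name from the file extension, hence "stash_sourcetype1-too_small". That suggests the file is being consumed by the default [batch://$SPLUNK_HOME/var/spool/splunk] input (which sets no sourcetype) rather than by the custom stanza, so the [stash_sourcetype1] props, including the header processing, never apply. A hedged first check, showing which batch stanzas are in effect and which file each setting comes from:

    $SPLUNK_HOME/bin/splunk btool inputs list batch --debug

If both stanzas are live, writing these files to a dedicated spool subdirectory and pointing the custom batch stanza there is one way to avoid the overlap.
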
Hi Splunkers, I have data like this:

    Primary Key_1:
         subkey_1 : subvalue_1
         subkey_2 : subvalue_2
    Primary Key_2:
         subkey_1 : subvalue_1
         subkey_2 : subvalue_2

By the way, this is all one event. I extract the data, but I want to see it in Splunk as key1.subkey_1 = subvalue_1. I tried ($1::$2:):$3 in transforms.conf, but I failed. What is the best way to extract this data the way I want, or is that even possible?

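A search-time sketch, assuming the literal key and subkey names from the sample above: split the event into primary-key blocks, pair up the subkeys and subvalues, and build dotted field names with eval's {field} syntax.

    | rex max_match=0 "(?ms)^(?<block>Primary Key_\d+:(?:\s+subkey_\d+ : \S+)+)"
    | mvexpand block
    | rex field=block "^(?<pk>[^:]+):"
    | rex field=block max_match=0 "(?<sk>subkey_\d+) : (?<sv>\S+)"
    | eval pair = mvzip(sk, sv, "=")
    | mvexpand pair
    | eval fname = pk . "." . mvindex(split(pair, "="), 0)
    | eval fval = mvindex(split(pair, "="), 1)
    | eval {fname} = fval
    | fields - block pk sk sv pair fname fval

This produces fields like "Primary Key_1.subkey_1 = subvalue_1". An index-time transforms.conf version is harder, because REPEAT_MATCH cannot carry the primary key across matches.
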
Hi Community, on a Universal Forwarder I see these logs:

    09-29-2022 12:12:17.410 +0200 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=18

I know it is related to gz files; in fact Splunk is monitoring gz files. To increase the queue size, I usually push a server.conf with new values, like this:

    [queue=aeq]
    maxSize = 2MB

It doesn't seem to work, because I keep seeing in the logs:

    Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500

Do you know how this queue size can be edited?

Thanks,
Marta

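A first check worth making (assuming the stanza really reached the forwarder) is which server.conf wins, since a copy in system/local or another app can override the deployed one, and the forwarder needs a restart for queue sizes to take effect:

    $SPLUNK_HOME/bin/splunk btool server list queue=aeq --debug

The --debug flag prints the file each effective setting comes from, so you can see whether maxSize = 2MB is being applied at all.
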
Hi,

We have the below problem data in a lookup:

    pan    assestId                                                                                 item_deviceId           phoneNumber  imeID
    11023  ass#ABC1#man6558962f asst#ABC1#man827631e ite#0#man76451627ahdgs ite#0#man76451627ahd75  ite#0#man76451627ahdgs  8763173699   123456789
    11023  ass#ABC1#man6558962f asst#ABC1#man827631e ite#0#man76451627ahdgs ite#0#man76451627ahd75  ite#0#man76451627ahd75  8736628187   987654321

Now we need a new field "Mobile_DeviceId" derived from "assestId" for each of the identical rows, as per the Splunk table below:

    pan    assestId                                                                                 item_deviceId           phoneNumber  imeID      Mobile_DeviceId
    11023  ass#ABC1#man6558962f asst#ABC1#man827631e ite#0#man76451627ahdgs ite#0#man76451627ahd75  ite#0#man76451627ahdgs  8763173699   123456789  ass#ABC1#man6558962f
    11023  ass#ABC1#man6558962f asst#ABC1#man827631e ite#0#man76451627ahdgs ite#0#man76451627ahd75  ite#0#man76451627ahd75  8736628187   987654321  asst#ABC1#man827631e

Is this possible with SPL? Please help me create the SPL. My query is:

    | inputlookup abc.csv
    | table pan assestId item_deviceId phoneNumber imeID
    | eval Mobile_DeviceId=split(assestId," ")
    | mvexpand Mobile_DeviceId
    | search Mobile_DeviceId=ass#*

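The example output suggests that, for a given pan, the Nth duplicate row should take the Nth token of assestId; if that reading is right (it is an assumption, as is the space separator between tokens), a sketch that numbers the rows and indexes into the tokens:

    | inputlookup abc.csv
    | streamstats count as row by pan
    | eval Mobile_DeviceId = mvindex(split(assestId, " "), row - 1)
    | fields - row
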
I'm sure this must be possible, but I can't find a way; unfortunately there are a couple of threads on this with no solution. I just want to display a table vertically, with titles as column 1 and values as column 2, like a bullet list. All the information I found suggests that the "transpose" command is the way to go, but I don't know how to achieve it. Any suggestion?

    Field 1  Field 2  Field 3
    Value    Value    Value

And this is how I'd like to express the table:

    Fields   Values
    Field 1  Value
    Field 2  Value
    Field 3  Value

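For a single-row result this is exactly what transpose does; a sketch, where the rename matches transpose's default "row 1" output column:

    ... | table "Field 1" "Field 2" "Field 3"
    | transpose column_name=Fields
    | rename "row 1" as Values
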
Hi, we would like to know which users are in the local Administrators group and which is the active user account on our Windows clients.

1. To get the local admins we use "net localgroup Administrators" and write the output into a text file. This is the output.txt:

    -------------------------------------------------------------------------------
    Aliasname     Administratoren
    Beschreibung  Administratoren haben uneingeschränkten Vollzugriff auf den Computer bzw. die Domäne.

    Mitglieder

    -------------------------------------------------------------------------------
    Administrator
    AdminX
    AdminY
    AdminZ
    User
    Der Befehl wurde erfolgreich ausgeführt.
    -------------------------------------------------------------------------------

So there are five members in the local Administrators group. How can we get these values into fields? Like:

    localAdmin = Administrator
    localAdmin = AdminX
    localAdmin = AdminY
    localAdmin = AdminZ
    ...

2. We use "query user" to get the active user and write the output into a text file. This is the output.txt:

    BENUTZERNAME  SITZUNGSNAME  ID  STATUS  LEERLAUF  ANMELDEZEIT
    >user         console       1   Aktiv   1:07      26.09.2022 12:41

How can we extract these fields? Like:

    Benutzername = user
    Sitzungsname = console
    ID = 1
    ...

Thank you in advance!
Dominik

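Assuming each text file is indexed as one multi-line event, two hedged rex sketches. For the group members (the lines between the dashed separator after "Mitglieder" and the "Der Befehl" line):

    | rex "(?ms)Mitglieder\s*\n-+\n(?<members>.*?)\nDer Befehl"
    | eval localAdmin = split(members, "\n")
    | mvexpand localAdmin

For the "query user" output (the header line is skipped automatically because the ID position must be numeric):

    | rex "(?m)^>?(?<Benutzername>\S+)\s+(?<Sitzungsname>\S+)\s+(?<ID>\d+)\s+(?<Status>\S+)\s+(?<Leerlauf>\S+)\s+(?<Anmeldezeit>.+)$"
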
The splunkd health report has the following message:

    The percentage of non-high priority searches skipped (97%) over the last 24 hours is very high and exceeded the red threshold (20%) on this Splunk instance.

How do I find which searches are high priority and which are non-high priority in Splunk?

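A saved search's priority comes from schedule_priority in savedsearches.conf (default, higher, or highest); anything left at "default" counts as non-high priority. The scheduler logs every skip to _internal, so a sketch to see what is being skipped and why:

    index=_internal sourcetype=scheduler status=skipped
    | stats count by app savedsearch_name reason
    | sort - count
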
I have a string of data and I've created a regex to break it down into different fields. There are date values within it (start_date and end_date), but the format is ddmmyyyy, i.e. 2901012001. How can I convert it into DD-MM-YYYY so Splunk recognises it as a date, or can show it in that date format? Ideally I'd like that to be done on ingestion. I have a props.conf and transforms.conf file for the app this sits in.

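At search time the conversion is a strptime/strftime round trip; a sketch, assuming start_date really is ddmmyyyy (e.g. 29012001):

    | eval start_date = strftime(strptime(start_date, "%d%m%Y"), "%d-%m-%Y")

Doing it at ingestion means rewriting _raw (for example with a SEDCMD in props.conf), which is riskier because the pattern must not catch other eight-digit runs.
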
Is there any function that works like GROUP BY GROUPING SETS in MySQL, so that I can get a value for each group plus a total one?

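SPL has no direct GROUPING SETS equivalent, but appendpipe can add rolled-up rows after a stats; a sketch with hypothetical field names:

    ... | stats sum(bytes) as bytes by host
    | appendpipe [ stats sum(bytes) as bytes | eval host="TOTAL" ]

addcoltotals is a simpler alternative when only a grand-total row is needed.
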
Hi Team, we are planning to deploy synthetic monitoring in AppD, so we wanted to know the prerequisites to start with. If someone can help us with detailed instructions, that would be great. We also wanted to check whether any synthetic recorder is available within AppD to record the flows/journeys.

Thanks,
Sravan Kumar

I have the below string in my error log:

    {"@odata.context":"https://apistaging.payspace.com/odata/v1.1/11846/$metadata#EmployeePosition/$entity","Message":"Invalid value for field Directly reports to Employee Number.","Details":[{"Message":"Invalid value for field Directly reports to Employee Number."}],"Success":false}

I have the code shown below; the embedded double quotes have to be backslash-escaped for the eval to parse:

    | makeresults
    | eval test = "{\"@odata.context\":\"https://apistaging.payspace.com/odata/v1.1/11846/$metadata#EmployeePosition/$entity\",\"Message\":\"Invalid value for field Directly reports to Employee Number.\",\"Details\":[{\"Message\":\"Invalid value for field Directly reports to Employee Number.\"}],\"Success\":false}"
    | rex field=test max_match=0 "(?<test>\w+)"
    | eval test = mvjoin(test, "-")

The rex strips all the special characters, but the quotes were causing errors, and in the end all I actually need is the message string, "Invalid value for field Directly reports to Employee Number."

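Rather than stripping characters, a sketch that pulls the Message value out directly:

    | rex field=test "\"Message\":\"(?<message>[^\"]+)\""

Since the string is valid JSON, spath works too:

    | spath input=test path=Message output=message
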
Hey folks, I am new to Dashboard Studio. Can we create a drilldown from a bar chart, so that selecting an individual bar updates the search log table accordingly? Or can we create this only with a Classic dashboard?

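Dashboard Studio supports this through a drilldown.setToken event handler in the chart's JSON (the token name here is a placeholder); the table's search then references $selected$:

    "eventHandlers": [
        {
            "type": "drilldown.setToken",
            "options": {
                "tokens": [
                    {"token": "selected", "key": "name"}
                ]
            }
        }
    ]

With "key": "name", the token is set to the clicked bar's category value.
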
Hey Splunkers!! Is there any way to export my custom visualization (a box plot) in PDF format? I checked Splunkbase and found some apps for it, but is there any other way to export a custom visualization in PDF format?

Thanks.
----------
RIL

Hi Team, I am unable to send data to Splunk from GCP. To give some background, I have created a free-trial Splunk Cloud Platform (14 days) and am trying to integrate Splunk with GCP. My Splunk Cloud Platform URL:

    https://prd-p-svf32.splunkcloud.com

I have created a HEC token in Splunk and am specifying the HEC URL and the token in my GCP code, but it fails to connect to Splunk. I have tried the URLs below, but nothing worked. Can someone help with what I am missing here?

    https://prd-p-svf32.splunkcloud.com/
    http://prd-p-svf32.splunkcloud.com/
    https://prd-p-svf32:8088/
    http://si-i-0a1323473acd7871c.prd-p-svf32.splunkcloud.com/
    https://si-i-0a1323473acd7871c.prd-p-svf32.splunkcloud.com
    https://prd-p-svf32.splunkcloud.com:8088
    https://prd-p-svf32.splunkcloud.com/services/collector/event
    https://prd-p-svf32.splunkcloud.com:8088/services/collector/event
    https://http-inputs.prd-p-svf32.splunkcloud.com:8088/services/collector/event
    https://http-inputs.prd-p-svf32.splunkcloud.com:8088
    https://http-inputs.prd-p-svf32.splunkcloud.com:8088/gcp-collector-scf
    https://http-inputs.prd-p-svf32.splunkcloud.com/gcp-collector-scf
    https://prd-p-svf32.splunkcloud.com/en-US/manager/search/http-eventcollector
    https://prd-p-svf32.splunkcloud.com:8088/en-US/account

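For Splunk Cloud, the documented HEC endpoint prefixes the stack name with "http-inputs-" (a hyphen, not a dot, which none of the attempts above use) and listens on port 443. A hedged curl test, assuming the token is valid and HEC is enabled on the stack:

    curl "https://http-inputs-prd-p-svf32.splunkcloud.com:443/services/collector/event" \
        -H "Authorization: Splunk <your-hec-token>" \
        -d '{"event": "hello from GCP", "sourcetype": "manual"}'
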
[screenshot: table A] [screenshot: table B]

I know there are lots of ways to spread table B into table A. Is there any method to transform table A into table B in Splunk without losing any data, like unite in R or pivot in BigQuery?

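If table A is the wide form and table B the long form (a guess, since the screenshots did not survive), untable is the usual wide-to-long tool in SPL; a sketch with hypothetical field names:

    ... | untable id field_name field_value

Each column of the wide table becomes a (field_name, field_value) row keyed by id.
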
I get an error when uploading data in archive format (a gz file) to Splunk Enterprise in a Linux environment and would like to know how to resolve it. The error message is as follows:

    Error decompressing '/opt/splunk/var/run/splunk/dispatch/xxxx/xxxx/xxx.gz' with command '/bin/sh -c "gzip -cd -"': PID XXXX exited with code 2

What I would like to achieve is as follows:
・I want to prevent errors when uploading gz files to Splunk Enterprise.
・I can import the same gz files to Splunk Enterprise on Windows without any problem.

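A non-zero gzip exit code usually means the file is not clean gzip data by the time Splunk shells out to decompress it; a transfer from Windows in text/ASCII mode (which mangles binary files) is a common culprit. A quick check on the Linux host:

    # verify the archive is intact gzip data
    gzip -t /path/to/file.gz
    # confirm what the file actually contains
    file /path/to/file.gz
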
    case_S56_search_Get_T01_search,{"success":false "message":"Note not found: 52229548" "messageCode":"**" "localizedMessage":"Note not found: *****" "responseObject":null "warning":null}

I want to display the above string, split at the comma, in two columns in Splunk (under events, statistics or visualization). I have thousands of similar strings that differ only in the leading name (case_S56_search_Get_T01_search). So far I have:

    index=**** source=*ResponseAnalyzer*
    | rex field=ExistingFieldMaybe_raw "[,\s]+(?<MyCaptureFieldName>[^,]+)"

Please help me.

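A sketch that splits each event at the first comma into a name column and a payload column:

    index=**** source=*ResponseAnalyzer*
    | rex field=_raw "^(?<case_name>[^,]+),(?<payload>.+)$"
    | table case_name payload
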
I'd appreciate your help with an MLTK error on fit:

    Error in 'fit' command: External search command exited unexpectedly with non-zero error code 1.

Splunk version: 9.0.1. I installed Splunk_SA_Scientific_Python_linux_x86_64 and the Machine Learning Toolkit (MLTK) app, and updated the latest libraries like numpy, scipy and scikit-learn at the location below, but no luck:

    /etc/apps/Splunk_SA_Scientific_Python_darwin_x86_64/bin/darwin_x86_64/lib/python3.8/site-packages/

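Hand-upgrading numpy/scipy/scikit-learn inside the Scientific Python add-on is itself a likely cause, since MLTK pins specific library versions; note also that the path above points into the darwin (macOS) add-on while the linux one was installed. To dig out the underlying Python traceback, a sketch against MLTK's mlspl log (which is indexed into _internal):

    index=_internal source=*mlspl.log* ERROR
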