All Topics

I have a string of data and I've created a regex to break that set into different fields. There are date values within it (start_date and end_date), but the format is ddmmyyy, i.e. 2901012001. How can I convert it into DD-MM-YYYY so that Splunk recognises it as a date, or so it can be shown in that date format? Ideally I'd like that to be done at ingestion. I have a props.conf and a transforms.conf file for the app this sits in.
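A minimal search-time sketch, assuming the raw value is an eight-digit DDMMYYYY string (the sample above has extra digits, so the format string may need adjusting; start_date is taken from the post):

| eval start_date_fmt = strftime(strptime(start_date, "%d%m%Y"), "%d-%m-%Y")

At ingest time, the usual route is timestamp recognition via TIME_FORMAT and TIME_PREFIX in props.conf rather than rewriting the raw text.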
Is there any function that works like GROUP BY GROUPING SETS in MySQL? So that I can get a value for each group plus a total one.
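SPL has no direct GROUPING SETS equivalent, but stats plus addcoltotals covers the common "one row per group plus a grand-total row" case. A sketch with hypothetical index and field names:

index=sales
| stats sum(amount) as total_amount by region
| addcoltotals labelfield=region label="ALL"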
Hi Team, We are planning to deploy synthetic monitoring in Appd. So wanted to know the pre-requisites to start with the same. If Someone can help us with the detailed instructions it would be great... See more...
Hi Team, We are planning to deploy synthetic monitoring in Appd. So wanted to know the pre-requisites to start with the same. If Someone can help us with the detailed instructions it would be great.  Also wanted to check if we have any synthetic recorder available within Appd to record the flow\journeys?  Thanks, Sravan Kumar
I have the below string in my error log:

{"@odata.context":"https://apistaging.payspace.com/odata/v1.1/11846/$metadata#EmployeePosition/$entity","Message":"Invalid value for field Directly reports to Employee Number.","Details":[{"Message":"Invalid value for field Directly reports to Employee Number."}],"Success":false}

I have the code shown below:

| makeresults
| eval test = "{"@odata.context":"https://apistaging.payspace.com/odata/v1.1/11846/$metadata#EmployeePosition/$entity","Message":"Invalid value for field Directly reports to Employee Number.","Details":[{"Message":"Invalid value for field Directly reports to Employee Number."}],"Success":false}"
| rex field=test max_match=0 "(?<test>\w+)"
| eval test = mvjoin(test, "-")

The code works by removing all the special characters, but it throws an error because of the embedded double quotes. I need to know how I can ignore or replace the quotes, and then extract only the message string (the part I marked in bold).
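In SPL, double quotes inside an eval string literal must be escaped with backslashes; once the JSON sits intact in a field, spath can pull out just the Message. A minimal sketch with a shortened payload for illustration:

| makeresults
| eval test = "{\"Message\":\"Invalid value for field Directly reports to Employee Number.\",\"Success\":false}"
| spath input=test path=Message output=msg
| table msg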
Hey folks, I am new to Dashboard Studio. Can we create a drilldown from a bar chart, so that selecting an individual bar updates the search log table accordingly? Or can this only be done with a Classic Dashboard?
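Dashboard Studio can do this without falling back to Classic, via a setToken drilldown event handler on the chart and a $token$ in the table's search. A sketch based on my understanding of Dashboard Studio's event handlers; the IDs and the field name host are hypothetical:

"viz_bar_chart": {
    "type": "splunk.bar",
    "dataSources": { "primary": "ds_bar" },
    "eventHandlers": [
        {
            "type": "drilldown.setToken",
            "options": {
                "tokens": [
                    { "token": "selected_host", "key": "row.host.value" }
                ]
            }
        }
    ]
}

The table's data source search would then filter on host="$selected_host$".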
Hey Splunkers!! Is there any way to export my custom visualization (a BoxPlot) in PDF format? I checked Splunkbase and found some apps for it, but is there any other way to export my custom visualization as a PDF? Thanks. ---------- RIL
Hi Team, I am unable to send data to Splunk from GCP. To give some background: I have created a free-trial Splunk Cloud Platform (14 days) and am trying to integrate Splunk with GCP. My Splunk Cloud Platform URL is https://prd-p-svf32.splunkcloud.com. I have created a HEC token in Splunk and am specifying the HEC URL and the token in my GCP code, but it fails to connect to Splunk. I have tried the URLs below, but nothing worked. Can someone help with what I am missing here?

https://prd-p-svf32.splunkcloud.com/
http://prd-p-svf32.splunkcloud.com/
https://prd-p-svf32:8088/
http://si-i-0a1323473acd7871c.prd-p-svf32.splunkcloud.com/
https://si-i-0a1323473acd7871c.prd-p-svf32.splunkcloud.com
https://prd-p-svf32.splunkcloud.com:8088
https://prd-p-svf32.splunkcloud.com/services/collector/event
https://prd-p-svf32.splunkcloud.com:8088/services/collector/event
https://http-inputs.prd-p-svf32.splunkcloud.com:8088/services/collector/event
https://http-inputs.prd-p-svf32.splunkcloud.com:8088
https://http-inputs.prd-p-svf32.splunkcloud.com:8088/gcp-collector-scf
https://http-inputs.prd-p-svf32.splunkcloud.com/gcp-collector-scf
https://prd-p-svf32.splunkcloud.com/en-US/manager/search/http-eventcollector
https://prd-p-svf32.splunkcloud.com:8088/en-US/account
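On Splunk Cloud, HEC traffic goes to the http-inputs- prefixed hostname (hyphen, not dot) on port 443, so none of the URLs above match the documented pattern. A sketch of a test call, assuming the token is active:

curl "https://http-inputs-prd-p-svf32.splunkcloud.com:443/services/collector/event" \
    -H "Authorization: Splunk <your-HEC-token>" \
    -d '{"event": "hello from GCP", "sourcetype": "gcp:test"}'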
[Images: table A and table B] I know there are lots of ways to spread table B out into table A. Is there any method to transform table A into table B in Splunk without losing any data, like unite in R or pivot in BQ?
I have an error when uploading data in archive format (a gz file) to Splunk Enterprise in a Linux environment, and I would like to know how to resolve it. The error message is as follows:

Error decompressing '/opt/splunk/var/run/splunk/dispatch/xxxx/xxxx/xxx.gz' with command '/bin/sh -c "gzip -cd -"': PID XXXX exited with code 2

What I would like to achieve is as follows:
- I want to prevent errors when uploading gz files to Splunk Enterprise.
- I can import gz files to Splunk Enterprise on Windows without any problem.
case_S56_search_Get_T01_search,{"success":false "message":"Note not found: 52229548" "messageCode":"**" "localizedMessage":"Note not found: *****" "responseObject":null "warning":null}

I want to display the above string, split at the comma, as two columns in Splunk, under Events, Statistics, or a Visualization. I have thousands of similar strings, each with a different leading name (here case_S56_search_Get_T01_search). This is what I have so far:

index=**** source=*ResponseAnalyzer*
| rex field=ExistingFieldMaybe_raw "[,\s]+(?<MyCaptureFieldName>[^,]+)"

Please help me.
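A minimal sketch that splits each raw event at the first comma into two columns, assuming the leading case name never itself contains a comma (the field names case_name and response are hypothetical):

index=**** source=*ResponseAnalyzer*
| rex field=_raw "^(?<case_name>[^,]+),(?<response>.+)$"
| table case_name response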
Appreciate your help on this MLTK fit error:

Error in 'fit' command: External search command exited unexpectedly with non-zero error code 1.

Splunk version: 9.0.1. I installed Splunk_SA_Scientific_Python_linux_x86_64 and the Machine Learning Toolkit (MLTK) app, and updated the libs (numpy, scipy, scikit_learn) to the latest versions at the location below, but no luck:

/etc/apps/Splunk_SA_Scientific_Python_darwin_x86_64/bin/darwin_x86_64/lib/python3.8/site-packages/
I have an application that sends logs to Splunk every few seconds. These logs are "snapshots" which provide a static view of the system at the time they were taken/sent to Splunk. I am attempting to get the latest rows from Splunk and present them in a table, where the latest rows are determined by _time. In the example below I want to retrieve the last two rows, because they have the highest _time value. Any help would be appreciated.

_time                    Name    Status
9/28/22 8:14:08.968 PM   SPID 1  Queued
9/28/22 8:14:08.968 PM   SPID 2  Started
9/28/22 8:14:08.968 PM   SPID 3  Failing
9/28/22 8:14:12.968 PM   SPID 1  Started
9/28/22 8:14:12.968 PM   SPID 2  Started
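A minimal sketch that keeps only the most recent snapshot, assuming every row of a snapshot shares an identical _time (the index and sourcetype names are hypothetical):

index=myapp sourcetype=snapshot
| eventstats max(_time) as latest_snapshot
| where _time = latest_snapshot
| table _time Name Status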
Hi, I have a question related to defining volumes, especially the cold volume in our case. I want to point the cold volume to a unique location on network-attached storage.

Example: we have 3 indexers with the hostnames idx01, idx02, idx03, and a local mount point on every indexer which points to the NAS mynas:

/mnt/coldvolume -> nfs://mynas

Option 1: Can I then define a volume in /etc/system/local/indexes.conf on every indexer?

On idx01:
[volume:coldvolume]
path = /mnt/coldvolume/idx01/

On idx02:
[volume:coldvolume]
path = /mnt/coldvolume/idx02/

On idx03:
[volume:coldvolume]
path = /mnt/coldvolume/idx03/

The indexes that are then distributed to all indexers in this cluster are of the format:

[myniceindex]
coldPath = volume:coldvolume/myniceindex/colddb

Option 2: As an alternative, I can mount the subfolders on the NAS by modifying /etc/fstab and letting /mnt/coldvolume point to nfs://mynas/idx{01,02,03}. I can then distribute one volume definition to all 3 indexers:

[volume:coldvolume]
path = /mnt/coldvolume/

Question: Is option 1 a valid / supported configuration for a Splunk indexer cluster?
Question: Or is option 2 the best practice?

Regards, Rob van de Voort
Hi, I have been able to get the following data into Splunk as key-value pairs in the following format:

sourcetype="excel_page_10" mail_sender="jordi@jordilazo.com" mail_recipient="lazo@jordilazo.es" mail_date_ep="1635qqqqwe2160816.0" mail_nummails="1222asdasd.adasdqweqw" mail_level="0@qw....." mail_info="NO" mail_removal="NO" mail_area="Miami" mail_subject="RE: NMXWZFOG< >VSTI" mail_id="XXX-KKKK-NNNN-KNZI" mail_reviewcomment="Comentario:ÑC<AZR=@P""\a"

As can be seen in the image, Splunk has been able to classify all the fields and values correctly. However, it has also created a new field called AZR with the value @P, because it detected an = inside the reviewcomment value. What do I have to modify in props and transforms so that it treats the entire reviewcomment field as one single value, including the = symbol?
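A minimal sketch, assuming search-time extraction and that every value is wrapped in double quotes (the stanza and REPORT names are hypothetical). Because the regex only ends a value at the closing quote, an = inside the quotes stays part of the value:

transforms.conf:

[mail_kv_quoted]
REGEX = (\w+)="([^"]*)"
FORMAT = $1::$2

props.conf:

[excel_page_10]
KV_MODE = none
REPORT-mail_kv = mail_kv_quoted

Caveat: the sample comment contains a doubled quote (""), which this simple pattern would still truncate at; handling that needs a more permissive value pattern.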
Hello, Background story: I have a dataset that is ingested into Splunk through the HTTP Event Collector. When the connector was added, it started ingesting current logs from the appliance, but no historical logs prior to the day it was connected. To account for those missing logs, a lookup was created. The dataset contains "Issue IDs" with their corresponding "status".

What I am trying to do is combine the data from the lookup with the index to pull back the latest value of "status" for each "Issue ID". However, when the query runs, the value from the lookup always trumps the recent data from the index, even though the lookup is one week older than the log it is compared against. Without the lookup, the query works fine and pulls the latest values for "status". This is how I have formulated the query:

|inputlookup OpenIssues
|fields "Issue ID",Status,Severity
|rename Status AS issue.status
|rename "Issue ID" as issue.id
|rename Severity as issue.severity
|append [search index="dataset A" sourcetype=_json
    |fields issue.id issue.status
    |stats latest(issue.status)
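The lookup rows carry no _time, so latest() cannot order them against the indexed events. A minimal sketch of one way around that, assuming lookup rows should always lose to indexed events (field names are taken from the post):

index="dataset A" sourcetype=_json
| fields _time issue.id issue.status
| append [
    | inputlookup OpenIssues
    | rename "Issue ID" as issue.id, Status as issue.status
    | eval _time = 0 ]
| stats latest(issue.status) as current_status by issue.id

Giving the lookup rows an epoch-0 _time means any indexed event for the same issue.id is newer and wins.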
I'm having the same issue working in Dashboard Studio: I am trying to increase the font size of the records in the table. I added the "fontSize" attribute to the table, and, like the suggestions above, the layout is absolute. Below are the relevant parts of the code. Any suggestions on how to increase the font size?

"viz_1qOASu7V": {
    "type": "splunk.table",
    "title": "",
    "description": "",
    "dataSources": {
        "primary": "ds_blah"
    },
    "options": {
        "count": 15,
        "fontSize": 50
    }
},
"layout": {
    "type": "absolute",
    "options": {
        "height": 2500,
        "width": 2500,
        "backgroundImage": {
            "sizeType": "cover",
            "x": 0,
            "y": 0,
            "src": "/backgroungimage.jpeg"
        },
        "display": "auto-scale"
    }
}
Trying to build a search looking for sporadic servers in the past 14 days; here is my search so far:

| tstats count as hourcount where (index=_* OR index=*) by _time, host span=1h
| appendpipe [
    | stats count by host
    | addinfo
    | eval _time = mvappend(info_min_time, info_max_time)
    | stats values(_time) as Time by host
    | mvexpand Time
    | rename Time as _time ]
| sort 0 _time host
| streamstats time_window=24h count as skipno by host
| where skipno = 1
| stats sum(skipno) as count by host
| eval mySporadicFlag = if(count=1,"no","yes")

But with how the streamstats and the filtering are set up, every host starts at 1 the first time an event is encountered in the 14 days, so it flags all my hosts as sporadic despite there being no gap. Any assistance?
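One possible restructuring, a sketch that measures the gap between consecutive hourly buckets per host instead of counting within a sliding window (the 24-hour threshold is an assumption):

| tstats count as hourcount where (index=_* OR index=*) by _time, host span=1h
| sort 0 host _time
| streamstats current=f window=1 last(_time) as prev_time by host
| eval gap_hours = round((_time - prev_time) / 3600)
| stats max(gap_hours) as max_gap_hours by host
| eval mySporadicFlag = if(max_gap_hours > 24, "yes", "no")

Hosts with only one hourly bucket get a null gap and may need a separate rule.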
Hello, I just registered for a trial of Splunk Cloud. For some reason it generated 4 instances for me, and I can't access any of them. I continuously get a 503 error:

Too many HTTP threads (1267) already running, try again later. The server cannot presently handle the given request.

I see some past answers about API issues, but I literally have done nothing in Splunk yet; it has shown this error ever since the instances were spun up.
Hello everyone, I was updating our licenses and I am still new to Splunk, so I accidentally deleted the auto_generated_pool. I recreated the pool to match the auto-generated one, but I would just like to know whether I might have broken anything, or whether there is any way to get Splunk to generate another auto_generated_pool. I checked our indexers and performed a few searches, and it looks like we are still gathering data.
Hello all, thanks for the help. My example:

logStreamName                                      _time                    message
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:22:23.295 AM  allo
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:22:23.295 AM  allo1
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:23:23.295 AM  Erreur
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:23:24.195 AM  allo2
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:23:24.195 AM  allo4

I want to get the following output, so that I can then apply a regex to extract some lines around the error message:

logStreamName                                      _time                    ms
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout    9/20/22 11:22:23.295 AM  allo allo1 Erreur allo2 allo4

If I try this search:

index="bnc_6261_pr_log_conf" logStreamName="*/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| stats count by logStreamName
| map maxsearches=20 search="search index="bnc_6261_pr_log_conf" logStreamName=$logStreamName$ | eval ms = _time + message | stats values(ms) by logStreamName, _time"
| transaction logStreamName
| rex field=ms "(?<ERROR_MESSAGE>.{0,50}Error.{0,50})"

it does not work: the rex on ms finds nothing, although if I rex on logStreamName with a different search string it does work. I tried to use the transaction command to concatenate the messages, and I created the ms variable to prepend the time to each message to force the message order to be kept; it is the only way I found. Please help me.
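A simpler sketch without map or transaction, assuming the goal is all messages of a stream joined in time order into one field (note the literal here is "Erreur" to match the sample data, where the original rex looked for "Error"):

index="bnc_6261_pr_log_conf" logStreamName="*/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| sort 0 _time
| stats list(message) as ms by logStreamName
| eval ms = mvjoin(ms, " ")
| rex field=ms "(?<ERROR_MESSAGE>.{0,50}Erreur.{0,50})"

Caveat: list() preserves input order but keeps at most 100 values per group, so very chatty streams would need a different aggregation.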