All Topics


I have an application that sends logs to Splunk every few seconds. These logs are "snapshots" which provide a static view of the system at the time they were taken and sent to Splunk. I am attempting to get the latest rows from Splunk and present them in a table, where "latest" is determined by _time. In the example below I want to retrieve the last two rows because they have the highest _time value. Any help would be appreciated.

_time                     Name     Status
9/28/22 8:14:08.968 PM    SPID 1   Queued
9/28/22 8:14:08.968 PM    SPID 2   Started
9/28/22 8:14:08.968 PM    SPID 3   Failing
9/28/22 8:14:12.968 PM    SPID 1   Started
9/28/22 8:14:12.968 PM    SPID 2   Started

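A minimal sketch of one way to do this, assuming the snapshots live in a hypothetical index and sourcetype (both names below are placeholders): attach the maximum _time in the search window to every event with eventstats, then keep only the events from that most recent snapshot.

index=my_app sourcetype=app_snapshot
| eventstats max(_time) as latest_snapshot
| where _time = latest_snapshot
| table _time Name Status

Because eventstats writes latest_snapshot onto every event, the where clause keeps just the rows whose timestamp equals the newest snapshot time.
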
Hi, I have a question about defining volumes, specifically the cold volume in our case. I want to point the cold volume to a unique location on network-attached storage.

Example: we have 3 indexers with the hostnames idx01, idx02 and idx03, and each indexer has a local mount point that points to the NAS mynas:

/mnt/coldvolume -> nfs://mynas

Option 1: Can I define a volume in /etc/system/local/indexes.conf on every indexer, like this?

On idx01:
[volume:coldvolume]
path = /mnt/coldvolume/idx01/

On idx02:
[volume:coldvolume]
path = /mnt/coldvolume/idx02/

On idx03:
[volume:coldvolume]
path = /mnt/coldvolume/idx03/

The indexes that are then distributed to all indexers in this cluster are of the format:

[myniceindex]
coldPath = volume:coldvolume/myniceindex/colddb

Option 2: As an alternative, I can mount the subfolders on the NAS by modifying /etc/fstab, so that /mnt/coldvolume points to nfs://mynas/idx{01,02,03}. I can then distribute the same volume definition to all 3 indexers:

[volume:coldvolume]
path = /mnt/coldvolume/

Question: Is option 1 a valid / supported configuration for a Splunk indexer cluster?
Question: Or is option 2 the best practice?

Regards,
Rob van de Voort

Hi, I have been able to get the following data into Splunk as key-value pairs, in the following format:

sourcetype="excel_page_10" mail_sender="jordi@jordilazo.com" mail_recipient="lazo@jordilazo.es" mail_date_ep="1635qqqqwe2160816.0" mail_nummails="1222asdasd.adasdqweqw" mail_level="0@qw....." mail_info="NO" mail_removal="NO" mail_area="Miami" mail_subject="RE: NMXWZFOG< >VSTI" mail_id="XXX-KKKK-NNNN-KNZI" mail_reviewcomment="Comentario:ÑC<AZR=@P""\a"

As can be seen in the attached image, Splunk has correctly classified all the fields and values. However, it has also created a new field called AZR with the value @P, because it detected an = inside the review comment value. What do I have to modify in props.conf and transforms.conf so that it treats the entire mail_reviewcomment field as a single value, including the = symbol?

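Not the props/transforms fix the post asks for, but as a quick search-time workaround, a sketch of a rex that re-extracts the whole comment into a separate field, assuming mail_reviewcomment is the last key-value pair in each event (the new field name is made up here to avoid colliding with the automatic extraction):

sourcetype="excel_page_10"
| rex field=_raw "mail_reviewcomment=\"(?<mail_reviewcomment_full>.*)\"\s*$"
| table mail_sender mail_recipient mail_reviewcomment_full

The greedy .* runs to the last closing quote of the event, so embedded = characters and doubled quotes stay inside the one captured value.
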
Hello,

Background story: I have a data set that is being ingested into Splunk through the HTTP Event Collector. When this connector was added, it started ingesting current logs from the appliance, but no historical logs from before the day it was connected. To account for those missing logs, a lookup was created. The data set contains "Issue IDs" with their corresponding "Status".

What I am trying to do is combine the data from the lookup with the index to pull back the latest value of "status" for each "Issue ID". However, when the query is run, the value from the lookup always trumps the more recent data from the index, even though the lookup is a week older than the events it is compared against. Without the lookup, the query works fine and pulls the latest values for "status". This is how I have formulated the query:

| inputlookup OpenIssues
| fields "Issue ID", Status, Severity
| rename Status AS issue.status
| rename "Issue ID" as issue.id
| rename Severity as issue.severity
| append
    [ search index="dataset A" sourcetype=_json
      | fields issue.id issue.status
      | stats latest(issue.status)

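A sketch of one way to make the indexed events win, assuming the lookup should only fill in issues the index has never seen: give every lookup row an artificially old _time so that stats latest() prefers any indexed event for the same issue.id (the lookup and index names come from the post; the _time trick itself is an assumption about what "latest" should mean here).

| inputlookup OpenIssues
| rename "Issue ID" as issue.id, Status as issue.status, Severity as issue.severity
| eval _time = 0
| append
    [ search index="dataset A" sourcetype=_json
      | fields _time issue.id issue.status ]
| stats latest(issue.status) as issue.status, latest(issue.severity) as issue.severity by issue.id

latest() is driven by _time, so a lookup row (sitting at _time 0) only surfaces when no indexed event exists for that issue.id.
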
I'm having the same issue working in Dashboard Studio: I am trying to increase the font size of the records in the table. I added the "fontSize" attribute to the table, and as in the suggestions above the layout is absolute. Below are the relevant snippets from the dashboard definition. Any suggestions on how to increase the font size?

"viz_1qOASu7V": {
    "type": "splunk.table",
    "title": "",
    "description": "",
    "dataSources": {
        "primary": "ds_blah"
    },
    "options": {
        "count": 15,
        "fontSize": 50
    }
},

"layout": {
    "type": "absolute",
    "options": {
        "height": 2500,
        "width": 2500,
        "backgroundImage": {
            "sizeType": "cover",
            "x": 0,
            "y": 0,
            "src": "/backgroungimage.jpeg"
        },
        "display": "auto-scale"
    }
}

Trying to build a search looking for sporadic servers over the past 14 days; here is my search so far:

| tstats count as hourcount where (index=_* OR index=*) by _time, host span=1h
| appendpipe
    [ | stats count by host
      | addinfo
      | eval _time = mvappend(info_min_time, info_max_time)
      | stats values(_time) as Time by host
      | mvexpand Time
      | rename Time as _time ]
| sort 0 _time host
| streamstats time_window=24h count as skipno by host
| where skipno = 1
| stats sum(skipno) as count by host
| eval mySporadicFlag = if(count=1,"no","yes")

But with the way the streamstats and the filtering are set up, every host starts at 1 the first time an event is encountered in the 14-day window, so it flags all my hosts as sporadic even when there is no gap at all. Any assistance?

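A sketch of an alternative that avoids the appendpipe entirely, assuming "sporadic" means a host has at least one gap of more than 24 hours between hourly buckets in the window (the 24-hour threshold is an assumption, adjust as needed):

| tstats count where (index=* OR index=_*) by _time span=1h, host
| sort 0 host _time
| streamstats current=f window=1 last(_time) as prev_time by host
| eval gap_hours = (_time - prev_time) / 3600
| stats max(gap_hours) as max_gap_hours by host
| eval mySporadicFlag = if(max_gap_hours > 24, "yes", "no")

Because tstats only returns buckets that actually contain events, a quiet stretch shows up as a jump between consecutive bucket times, which is what gap_hours measures. A host with a single bucket in the window gets a null gap here, so it may need special handling.
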
Hello, I just registered for a trial of Splunk Cloud. For some reason it generated 4 instances for me, and I can't access any of them. I continuously get a 503 error:

Too many HTTP threads (1267) already running, try again later
The server can not presently handle the given request.

I see some past answers about API issues, but I literally have done nothing in Splunk yet; the error has been there since the instances were spun up.

Hello everyone, I was updating our licenses and I am still new to Splunk, so I accidentally deleted the auto_generated_pool. I recreated the pool to match the auto-generated one, but I would just like to know whether I might have broken anything, and whether there is any way to get Splunk to generate another auto_generated_pool. I checked our indexers and performed a few searches, and it looks like we are still gathering data.

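If it helps to sanity-check the current state, a sketch of a search that lists the pools the license manager knows about (run it on the license manager; the exact set of returned fields can vary by version):

| rest /services/licenser/pools splunk_server=local
| table title stack_id quota used_bytes

If data keeps flowing and no licensing warnings show up in the messages, the recreated pool is most likely doing its job.
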
Hello all, thanks for the help. My example:

logStreamName                                        _time                      message
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:22:23.295 AM    allo
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:22:23.295 AM    allo1
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:23:23.295 AM    Erreur
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:23:24.195 AM    allo2
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:23:24.195 AM    allo4

I want to get the output below, so that I can afterwards apply a regex to extract a few lines around the error message:

logStreamName                                        _time                      ms
09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout      9/20/22 11:22:23.295 AM    allo allo1 Error allo2 allo4

If I try this search:

index="bnc_6261_pr_log_conf" logStreamName="*/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| stats count by logStreamName
| map maxsearches=20 search="search index=\"bnc_6261_pr_log_conf\" logStreamName=$logStreamName$ | eval ms=_time + message | stats values(ms) by logStreamName, _time"
| transaction logStreamName
| rex field=ms "(?<ERROR_MESSAGE>.{0,50}Error.{0,50})"

it does not work when I run the rex against ms, but if I run rex against logStreamName with a different search string it works. I used the transaction command to concatenate the messages, and I created the ms field to prepend the time to each message so that the order of the messages is preserved; it is the only way I found. Please help me.

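A sketch of a simpler way to build one concatenated, time-ordered string of messages per stream without map or transaction, and then pull the words around the error out of it (field and index names come from the post; the error token is assumed to be the literal "Erreur" that appears in the sample events):

index="bnc_6261_pr_log_conf" logStreamName="*/i-09bfc06d1ff10cb79/config_Ec2_CECIO_Linux/stdout"
| sort 0 _time
| stats list(message) as ms by logStreamName
| eval ms = mvjoin(ms, " ")
| rex field=ms "(?<ERROR_MESSAGE>(?:\S+\s+){0,2}Erreur(?:\s+\S+){0,2})"

Note that stats list() keeps at most 100 values per group, so very chatty streams would need a different aggregation (for example bucketing by time first).
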
Hi all, I would like to know if it is possible to deploy a single new app from the Deployer to a Search Head without losing all the apps already installed on the Search Head. Right now the shcluster/apps directory is empty, and I would like to know whether I can just drop in the new app and push it with "splunk apply shcluster-bundle", perhaps with some special arguments, without deleting all the other apps. Thanks, Mauro

I am using two macros in a search. However, I want to use them in such a way that, if they are broken or not available, the search will not fail to complete. Is that possible?

We have AV logs that send the detection and the block separately. I'm trying to create a query that takes each incident_id (which should have one log with a block and one log with a detection) and verifies that both logs are there, so that it produces a table like: incident_id, ifDetectionFound, ifBlockFound. I can't seem to wrap my mind around how to have it search the logs again for the block. Right now I'm just searching for action=block OR action=Detection, but that still requires me to compare incident_ids manually (and they are quite long, random strings of numbers).

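A sketch of one way to avoid a second pass over the logs, assuming the AV events sit in a placeholder index (hypothetical name) and that action distinguishes the two log types: group by incident_id once and derive both flags from the set of actions seen for that incident.

index=av_logs (action=block OR action=Detection)
| stats values(action) as actions by incident_id
| eval ifDetectionFound = if(isnotnull(mvfind(actions, "(?i)detection")), "yes", "no")
| eval ifBlockFound = if(isnotnull(mvfind(actions, "(?i)block")), "yes", "no")
| table incident_id ifDetectionFound ifBlockFound

mvfind() returns the index of the first matching value or null, so each flag simply records whether that action ever appeared for the incident.
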
I have search results that consist of 2 different log formats. Log 1 matches "MESSAGE "(?<JSON>\{.*\})" and Log 2 matches "Published Event for txn_id (?<tx_id>\w+)". Both formats are present in the results of the search below. I want to keep only the Log 1 events whose transaction_id matches the tx_id of some Log 2 event. I am trying to run the query below, but it gives zero results even though there are common transaction ids between the two formats. Is there any way to achieve this?

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON
| where transaction_id == tx_id

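The where clause compares two fields on the same event, but transaction_id and tx_id never occur on the same event, which is why nothing comes back. A sketch of one alternative, keeping the {search_results} placeholder from the post: gather the published tx_id values across all events with eventstats, then keep the JSON events whose transaction_id appears in that set.

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON
| eventstats values(tx_id) as published_ids
| where isnotnull(JSON)
| mvexpand published_ids
| where transaction_id = published_ids
| fields - published_ids tx_id

mvexpand duplicates each Log 1 event once per published id before the equality filter, so on very large result sets a subsearch on tx_id may scale better.
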
We are facing issues ingesting the required certificate data using the Certificate Transparency add-on.

Add-on used: https://splunkbase.splunk.com/app/4006/#/overview
Version: 1.3.1

We have added the inputs where we expect our certificate logs to appear, but it seems the add-on is not capturing all the logs from those inputs, and we do not see any errors either.

The query we are using to check for our company-related certificates is:

index=certificate sourcetype=ct:log LogEntryType=0 LeafCertificate.x509_extensions.subjectAltName{}=*company_domain_name*
| dedup LeafCertificate.serial
| table LeafCertificate.x509_extensions.subjectAltName{}, LeafCertificate.validity.notbefore, LeafCertificate.validity.notafter, source
| rename LeafCertificate.x509_extensions.subjectAltName{} as LeafCertificate_Name, LeafCertificate.validity.notbefore as Valid_From, LeafCertificate.validity.notafter as Valid_Till, source as Source

Hello Splunkers, I have a quick question: is there a Splunk command to list all receiving ports enabled on a specific instance? I know you can check that from the GUI under "Settings / Forwarding and receiving / Configure Receiving". There is also a CLI command to add a receiving port:

splunk enable listen <port> -auth <username>:<password>

Info I got here: https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/Enableareceiver

So I am wondering if there is a command to list all the ports enabled on a machine.

Thanks a lot,
GaetanVP

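Not strictly a CLI answer, but a sketch of a search against the REST endpoint that backs that GUI page, which lists the configured splunktcp receiving ports (field availability can vary a little by version):

| rest /services/data/inputs/tcp/cooked
| table splunk_server title disabled connection_host

Plain (non-splunktcp) TCP inputs live under /services/data/inputs/tcp/raw, and from the command line "splunk btool inputs list splunktcp" will dump the same stanzas as they are resolved from inputs.conf.
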
Hi Team, I need to set up an alert for node-level transaction volume. Please suggest a good method. Thanks

Hello, I have to index a log file from a Linux server into one index, but I need it to have two different sourcetypes. Is that possible? I tried it, but when I compare index=audit_idx sourcetype=linux_audit with index=audit_idx sourcetype=linux_audit_mll, the results are not the same; a few logs are missing from each. I want to know why this is happening. Thanks in advance.

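A sketch of a quick comparison that can show where and when the two sourcetypes diverge (index and sourcetype names taken from the post); it bins the event counts per hour side by side:

index=audit_idx (sourcetype=linux_audit OR sourcetype=linux_audit_mll)
| timechart span=1h count by sourcetype

If the gaps line up with specific time ranges, that narrows down when the two sourcetypes started receiving different subsets of the data.
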
Hi, I'm a Splunk beginner and I have a question; maybe you can help me. I want to insert into an index I created some data obtained by running a Python script. The output of the script looks like this:

sourcetype="script_emails" mail_sender="jordi@jordilazo.com" mail_recipient="jordilazo2@jordilazo.es" mail_date="10-10-2022" mail_subject="RE: NMXWZFOG< >VSTI" mail_reviewcomment="Comment:ÑC<AZR=@P"&"\A"

How do I configure inputs.conf, props.conf and transforms.conf so that it is loaded correctly into Splunk (field, value, source, sourcetype)? This is what I have so far:

inputs.conf

[script://"script.py"]
disabled = 0
index = python_emails
interval = 22 13 * * *
source = ???? (I don't know what to insert here)
sourcetype = mytest

transforms.conf

[test_sourcetype]
REGEX = sourcetype="(\w+)"
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[test_comment]
REGEX = mail_reviewcomment="(.+)"
FORMAT = mail_reviewcomment::$1
WRITE_META = true

props.conf

[mytest]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TIME_PREFIX = timestamp=
MAX_TIMESTAMP_LOOKAHEAD = 10
CHARSET = UTF-8
KV_MODE = auto
TRANSFORMS-test_sourcetype = test_sourcetype,test_comment

Thanks for your help!

Hello, we keep getting this error from one of our indexers (there are 3 in the cluster; only one is affected):

ERROR TcpInputConfig [60483 TcpListener] - SSL context not found. Will not open splunk to splunk (SSL) IPv4 port 9997

All indexers have the same SSL config in /opt/splunk/etc/system/local/inputs.conf:

[default]
host = z1234

[splunktcp-ssl:9997]
disabled = 0
connection_host = ip

[SSL]
serverCert = /opt/splunk/etc/auth/z1234_server.pem
sslPassword = <password>
requireClientCert = false
sslVersions = tls1.2

We also just found an additional inputs.conf on all indexers, in /opt/splunk/etc/apps/search/local/inputs.conf:

[splunktcp://9997]
connection_host = ip

We deleted this config on all indexers, as it is no longer valid and shouldn't be active. Unfortunately, after a Splunk restart, port 9997 is no longer open on host z1234, while on the other two hosts it is still open, which is strange.

Any idea what else to check or do to troubleshoot? Your help will be much appreciated!

Greetings,
Justyna

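A sketch of a search that may surface the underlying SSL problem on the affected indexer around restart time (the host name is taken from the post; the component names are the usual splunkd ones and could differ by version):

index=_internal host=z1234 source=*splunkd.log* log_level=ERROR (component=TcpInputConfig OR component=SSLCommon)
| table _time host component _raw

Errors about the certificate path or the sslPassword usually show up right next to the "SSL context not found" line and help narrow the problem down to the [SSL] stanza on that one host.
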
Hey all, I'm trying to find a way to bulk delete containers via the API in SOAR Cloud. I had an issue where Splunk created ~8000 containers under one of my labels while testing, and there is no way I am going to sit here for an hour deleting them in the GUI. I've read this post: How-to-Delete-Multiple-container, but that really only points me to the single requests.delete option, which is very slow. I can bulk update containers to change their status using requests.post and a list of container ids, but I don't see any way to bulk delete. For context, a for loop with requests.delete on each single container is actually slower than deleting them via the GUI. Am I missing something, or is this just not possible through the API?
