All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I would like to know if it is possible to deploy a single new app from the deployer to a search head cluster without losing all the apps already installed on the search heads. Right now the shcluster/apps directory is empty, and I would like to know if I can just add the new app and push it with the command "splunk apply shcluster-bundle", with some special arguments that allow me not to delete all the other apps. Thanks, Mauro
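For reference, a minimal sketch of the push command the post refers to, run on the deployer (host, port, and credentials here are placeholders; check the Distributed Search docs before relying on any flag to preserve apps that were never staged on the deployer):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme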
I am using two macros in a search. However, I want to use them in such a way that, if they are broken or not available, the search will still complete rather than fail.
We have AV logs that send the detection and the block separately. I'm trying to create a query where I can take each incident_id (which has a log with a block and a log with a detection) and have the query verify that both logs are there. So it would produce a table like incident_id, ifDetectionFound, ifBlockFound. I can't seem to wrap my mind around how to have it search the logs again for a block. Right now I'm just indexing for action=block OR action=Detection, but it still requires me to compare incident_ids (which are quite long random strings of numbers).
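One hedged sketch of the stats-based approach (the index name is a placeholder and the action values are assumed from the post):

index=av_logs action=block OR action=Detection
| stats values(action) as actions by incident_id
| eval ifDetectionFound=if(isnotnull(mvfind(actions, "(?i)^detection$")), "yes", "no")
| eval ifBlockFound=if(isnotnull(mvfind(actions, "(?i)^block$")), "yes", "no")
| table incident_id ifDetectionFound ifBlockFound

Grouping by incident_id collects both actions onto one result row, so no second pass over the logs is needed.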
I have the below search results that will consist of 2 different types of log formats or strings. Log 1: "MESSAGE "(?<JSON>\{.*\})" and Log 2: "Published Event for txn_id (?<tx_id>\w+)". Both of these formats will be present in the result of the search below. I want to filter only those logs with the Log 1 format that have the same transaction id as one in the Log 2 format. I am trying to run the query below; however, it gives zero results even though there are common transaction ids between these 2 log formats. Is there any way to achieve this?

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON
| where transaction_id == tx_id
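A hedged variant that joins across events via a shared key instead of comparing two fields on the same event (txn and published_id are made-up helper fields; the where clause at the end fails in the original because transaction_id and tx_id never appear on the same event):

{search_results}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| rex field=MESSAGE "Published Event for txn_id (?<tx_id>\w+)"
| spath input=JSON
| eval txn = coalesce(transaction_id, tx_id)
| eventstats values(tx_id) as published_id by txn
| where isnotnull(JSON) AND isnotnull(published_id)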
We are facing issues ingesting the required certificate data using the Certificate Transparency add-on. Add-on used: https://splunkbase.splunk.com/app/4006/#/overview Version: 1.3.1. We have added the inputs where we expect our certificate logs to be, but it seems the add-on is not capturing all the logs from those inputs. We do not see any errors either.

The query we are using to check for our company-related certificates is:

index=certificate sourcetype=ct:log LogEntryType=0 LeafCertificate.x509_extensions.subjectAltName{}=*company_domain_name*
| dedup LeafCertificate.serial
| table LeafCertificate.x509_extensions.subjectAltName{}, LeafCertificate.validity.notbefore, LeafCertificate.validity.notafter, source
| rename LeafCertificate.x509_extensions.subjectAltName{} as LeafCertificate_Name, LeafCertificate.validity.notbefore as Valid_From, LeafCertificate.validity.notafter as Valid_Till, source as Source
Hello Splunkers, I have a quick question: is there a Splunk command to list all receiving ports enabled on a specific instance? I know you can check that from the GUI under "Settings/Forwarding and receiving/Configure Receiving". There is also a CLI command to add a receiving port:

splunk enable listen <port> -auth <username>:<password>

Info I got here: https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/Enableareceiver So I am wondering if there is a command to list all ports enabled on a machine. Thanks a lot, GaetanVP
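Two hedged candidates to try (verify against your version's CLI help; the REST endpoint covers cooked/splunktcp inputs):

splunk display listen -auth <username>:<password>

| rest /services/data/inputs/tcp/cooked
| table splunk_server, title, disabled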
Hi Team, I need to set up an alert for node-level transaction volume. Please suggest a good method. Thanks
Hello, I have to index a log file on a Linux server into one index, but it needs to have two different sourcetypes. Is that possible? I tried, but when I compare index=audit_idx sourcetype=linux_audit and index=audit_idx sourcetype=linux_audit_mll, the results are not the same; a few logs are missing from each. I want to know why this is happening. Thanks in advance.
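For reference, one documented way to get the same input under two sourcetypes is CLONE_SOURCETYPE; a minimal sketch, assuming the stanza names from the post:

props.conf:

[linux_audit]
TRANSFORMS-clone_mll = clone_to_mll

transforms.conf:

[clone_to_mll]
REGEX = .
CLONE_SOURCETYPE = linux_audit_mll

With this, every event that arrives as linux_audit is duplicated into linux_audit_mll at index time, so the two sourcetypes should contain the same events.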
Hi, I'm new to Splunk and I have a question. Maybe you can help me. I want to insert into an index that I have created some data obtained by running a Python script, where the output of the script looks like this:

sourcetype="script_emails" mail_sender="jordi@jordilazo.com" mail_recipient="jordilazo2@jordilazo.es" mail_date="10-10-2022" mail_subject="RE: NMXWZFOG< >VSTI" mail_reviewcomment="Comment:ÑC<AZR=@P"&"\A"

How do I configure the inputs, props and transforms so that it is indexed correctly in Splunk? - Field - Value - Source - Sourcetype

I have this:

inputs.conf

[script://"script.py"]
disabled = 0
index = python_emails
interval = 22 13 * * *
source = ???? (I don't know what to insert here)
sourcetype = mytest

transforms.conf

[test_sourcetype]
REGEX = sourcetype="(\w+)"
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[test_comment]
REGEX = mail_reviewcomment="(.+)"
FORMAT = mail_reviewcomment::$1
WRITE_META = true

props.conf

[mytest]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TIME_PREFIX = timestamp=
MAX_TIMESTAMP_LOOKAHEAD = 10
CHARSET = UTF-8
KV_MODE = auto
TRANSFORMS-test_sourcetype = test_sourcetype,test_comment

Thanks for your help!
Hello, We keep getting these errors from one of our indexers (there are 3 in the cluster; only one is affected):

ERROR TcpInputConfig [60483 TcpListener] - SSL context not found. Will not open splunk to splunk (SSL) IPv4 port 9997

All indexers have the same SSL config in /opt/splunk/etc/system/local/inputs.conf:

[default]
host = z1234

[splunktcp-ssl:9997]
disabled = 0
connection_host = ip

[SSL]
serverCert = /opt/splunk/etc/auth/z1234_server.pem
sslPassword = <password>
requireClientCert = false
sslVersions = tls1.2

We have just found an additional inputs.conf on all indexers, in /opt/splunk/etc/apps/search/local/inputs.conf:

[splunktcp://9997]
connection_host = ip

We deleted this config on all indexers, as it is no longer valid and shouldn't be active. Unfortunately, after a Splunk restart, port 9997 is no longer open on host z1234. On the other two hosts it is still open...weird. Any idea what else to check or do to troubleshoot? Your help will be much appreciated! Greetings, Justyna
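For troubleshooting, btool shows the merged inputs configuration and which file each setting comes from; a standard check to run on the affected indexer:

/opt/splunk/bin/splunk btool inputs list splunktcp-ssl --debug
/opt/splunk/bin/splunk btool inputs list SSL --debug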
Hey all, I'm trying to find a way to bulk delete containers via the API in SOAR Cloud. I had an issue where Splunk created ~8000 containers under one of my labels when testing, and no way am I going to sit here for an hour deleting them in the GUI. I've read this post: How-to-Delete-Multiple-container but that really only points me to the single requests.delete option, which is very slow. I can bulk update containers to change the status using requests.post and a list of container IDs, but I don't see any way to bulk delete. For context, a for loop + requests.delete on each single container is actually slower than deleting them via the GUI. Am I missing it somewhere, or is this just not possible through the API?
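For what it's worth, a sketch of parallelizing the single-delete calls over one keep-alive session; the /rest/container/<id> endpoint pattern is taken from the linked post, and the tenant URL, token, and ID list are placeholders:

import concurrent.futures
import requests

BASE = "https://example.soar.splunkcloud.com"  # placeholder tenant URL
session = requests.Session()
session.headers.update({"ph-auth-token": "<automation-token>"})  # placeholder token

def delete_container(cid):
    # One DELETE per container; the Session reuses the TLS connection.
    resp = session.delete(f"{BASE}/rest/container/{cid}", timeout=30)
    return cid, resp.status_code

container_ids = [1001, 1002, 1003]  # placeholder IDs
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for cid, status in pool.map(delete_container, container_ids):
        if status != 200:
            print(f"container {cid}: HTTP {status}")

This doesn't make deletion a single bulk call, but the connection reuse and the thread pool should cut most of the per-request overhead that makes the plain loop so slow.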
Hi all, I need help extracting, as a new field, the user name that follows CORP\ in: Message=Task Scheduler started "{B9F5A32A-A340-49C1-B620-8C7A439CA849}" instance of the "\Microsoft\Office\OfficeTelemetryAgentFallBack" task for user "CORP\s-ks4" Thanks
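A hedged starting point (the field name task_user is made up, and backslash escaping in rex sometimes needs adjusting between \\ and \\\\ depending on the Splunk version):

| rex field=Message "for user \"CORP\\\\(?<task_user>[^\"]+)\""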
Hello, I'm currently ingesting XML and non-XML Windows event logs. I want to know the impact of disabling XML rendering on my UF. I also want to know whether I am getting duplicated raw events by rendering XML data (since non-XML and XML are being ingested together?). If the answer is yes, and I disable the XML format, will I get some data loss? (Because some event IDs are being ingested only in XML format.)
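For reference, the setting in question is renderXml on the UF's Windows event log inputs; a sketch, with the Security channel as just an example:

[WinEventLog://Security]
renderXml = true

Each stanza emits an event once, in either XML or classic format, so duplicates would typically point to two stanzas covering the same channel with different renderXml values.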
Hi Community, I need support to know how I can get the non-existent values from the two fields obtained from the "appendcols" command output. An example of the Splunk output in table format:

1st_Field    2nd_Field
1111         2222
empty        3333
empty        1111

I am able to get 1111 after using the lookup command, but I want to get only 2222 and 3333, as those are not present in 1st_Field.
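One hedged sketch (it renames first, because field names starting with a digit are awkward to reference in eval, and assumes 1st_Field has at least one value):

<your appendcols search>
| rename 1st_Field as first, 2nd_Field as second
| eventstats values(first) as first_vals
| where isnotnull(second)
| mvexpand first_vals
| eval matched = if(first_vals == second, 1, 0)
| stats max(matched) as matched by second
| where matched = 0
| table second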
Hi Team, One of our important dashboards has been deleted by a user on our Search Head, and our SH is hosted in Cloud and managed by Splunk Support. Is there any possibility to restore the deleted dashboard on the Search Head?
Hello, When the cs_uri field is not present in the log, the url field is evaluated from cs_uri_scheme, cs_host, cs_uri_path and cs_uri_query. But it does not take into account cs_uri_port in case the URL uses a non-standard port. For instance, if the real URL is http://somesite:8080/foo/bar, the TA will compute the url field as http://somesite/foo/bar. To solve this for the most common protocols (http, https with and without interception, ftp & rtsp), the line

EVAL-url = coalesce(cs_uri, if(isnull(cs_uri_scheme) OR (cs_uri_scheme=="-"), "", cs_uri_scheme+"://") + cs_host + cs_uri_path + if(isnull(cs_uri_query) OR (cs_uri_query == "-"), "", cs_uri_query))

should be replaced by

EVAL-url = coalesce(cs_uri, if(isnull(cs_uri_scheme) OR (cs_uri_scheme=="-"), "", cs_uri_scheme+"://") + cs_host + if((cs_uri_scheme=="http" AND cs_uri_port!=80) OR (cs_uri_scheme IN ("https","ssl") AND cs_uri_port!=443) OR (cs_uri_scheme="tcp" AND cs_method="CONNECT" AND cs_uri_port!="443") OR (cs_uri_scheme="ftp" AND cs_uri_port!=21) OR (cs_uri_scheme=="rtsp" AND cs_uri_port!=554), ":".cs_uri_port, "") + cs_uri_path + if(isnull(cs_uri_query) OR (cs_uri_query == "-"), "", cs_uri_query))
Hi, I need to increase the size of the text box filters in my Dashboard Studio dashboard. I need to be able to increase the size of all, or only selected, text box filters. I can't find articles about Dashboard Studio. Can someone assist please? Thanks!
Hi, Since the Splunk Cloud upgrade to 9.0.2208.1, we have noticed that our monitoring dashboards now have 90-second gaps on some of the search panels. Previously we would see a constant feed of data without gaps; now we are seeing intermittent lines with 90-second time gaps. We have tried tweaking the data model summarization period from the original 2 minutes to a larger period and also to a shorter 1-minute period. However, this does not prevent the 90-second gap in monitoring. Any suggestions on how to rectify this would be really appreciated. Thanks
Hello, I have a rest query with a field that contains a date and time. Is it possible to limit the search by this field so that it searches only the last 15 minutes? Thanks
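A hedged sketch using the updated field that many REST endpoints return (the endpoint, field name, and timestamp format here are assumptions; adjust to match your data):

| rest /services/<your endpoint>
| eval event_time = strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where event_time >= relative_time(now(), "-15m")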
I have an issue where the logs aren't ingested regularly. The log file updates every 5 minutes with the same line entries, and rolls over to a new file at the end of the day.

-rw-r--r--+ 1 novlua novlua 160416 Sep 18 23:55 iga_check_2022-09-18.log
-rw-r--r--+ 1 novlua novlua 197664 Sep 19 23:55 iga_check_2022-09-19.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 20 23:55 iga_check_2022-09-20.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 21 23:55 iga_check_2022-09-21.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 22 23:55 iga_check_2022-09-22.log
-rw-r--r--+ 1 novlua novlua 271783 Sep 23 23:55 iga_check_2022-09-23.log
-rw-r--r--+ 1 novlua novlua 326880 Sep 24 23:55 iga_check_2022-09-24.log
-rw-r--r--+ 1 novlua novlua 326880 Sep 25 23:55 iga_check_2022-09-25.log
-rw-r--r--+ 1 novlua novlua 124783 Sep 26 09:06 iga_check_2022-09-26a.log
-rw-r--r--+ 1 novlua novlua 271376 Sep 26 23:55 iga_check_2022-09-26.log
-rw-r--r--+ 1 novlua novlua 248613 Sep 27 23:55 iga_check_2022-09-27.log
-rw-r--r--+ 1 novlua novlua 97092 Sep 28 09:35 iga_check_2022-09-28.log

Log file entries example:

09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/work/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/completed/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/completed/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/work/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/completed/*.csv

I noted that when requesting a forced entry, it gets picked up.

inputs.conf

[monitor:///opt/netiq/idm/apps/tomcat/fulfillment/logs/*.log]
# blacklist = (\.gz)
whitelist = \.log$|\.txt$
# crcSalt = <SOURCE>
# disabled = false
index = IG_RequestLog
sourcetype = IG:RequestLogCheck
time_before_close = 10
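One hedged thing to try: since every day's file starts with near-identical lines, the monitor's initial checksum can match an already-read file and the new file gets skipped. Two documented knobs, sketched here with example values (pick one; crcSalt is already present but commented out above):

[monitor:///opt/netiq/idm/apps/tomcat/fulfillment/logs/*.log]
crcSalt = <SOURCE>      # mixes the file path into the checksum, so each file is tracked separately
initCrcLength = 1024    # or hash a longer initial segment (default is 256 bytes)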