All Topics

Hi All, I have logs like the below in Splunk:

log1: "count":1,
log2: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM3
log3: "count":1,
log4: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM2
log5: "count":1,
log6: gcg.gom.esb_159515.rg.APIMediation.Disp1.3.Rs.APIM1

I used the below query to create a table showing the "Queue" and the "Consumer count":

***** | rex field=_raw "Rs\.(?P<Queue>\w+)" | rex field=_raw "count\"\:(?P<Consumer_Count>\d+)\," | table Queue,Consumer_Count

But this query gives the table in the below manner:

Queue    Consumer_Count
         1
APIM3
         1
APIM2
         1
APIM1

I want the rows to be combined in the below manner:

Queue    Consumer_Count
APIM3    1
APIM2    1
APIM1    1

Please help to modify the query to get the desired output. Thank you..!!
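One way to pair the rows up (a sketch, assuming each "count" event always sits immediately adjacent to its queue event, as in the sample): let streamstats carry the last extracted Consumer_Count across events, then keep only the rows where Queue was extracted:

```spl
*****
| rex field=_raw "Rs\.(?P<Queue>\w+)"
| rex field=_raw "count\":(?P<Consumer_Count>\d+)"
| streamstats last(Consumer_Count) as Consumer_Count
| where isnotnull(Queue)
| table Queue, Consumer_Count
```

Depending on whether the count line comes before or after its queue line in search order, you may need a `| reverse` first, or `streamstats current=f`.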
Hi Team, I have observed some strange behavior. We had the 'Splunk Add-on for AWS' installed on the IDM cloud node and the indexers. Recently we asked the Splunk Cloud team to install the add-on on the Search Heads as well. After the installation on the SHs, the AWS alerts/notables that used to trigger stopped triggering. We simply disabled the add-on on the SH and it started working again. What could be the possible issue?

Thanks
I have an event that looks like the below:

2022-06-15 19:59:57.489 threadId=L4GFP2275S1K class="ActiveSession" mname="NA" callId="NA" eventType="InMsg" data="<InfoNox_Interface xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><TestRQ><Merchant_ID>testmid</Merchant_ID></TestRQ>"

I would like to remove the below XML element, with its attributes, from the data field. How can I do that?

<InfoNox_Interface xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

The result I want is:

2022-06-15 19:59:57.489 threadId=L4GFP2275S1K class="ActiveSession" mname="NA" callId="NA" eventType="InMsg" data="<TestRQ><Merchant_ID>testmid</Merchant_ID></TestRQ>"
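If the unwanted prefix is always a literal <InfoNox_Interface ...> opening tag, a sed-mode rex at search time is one way to strip it (a sketch):

```spl
... | rex mode=sed "s/<InfoNox_Interface[^>]*>//g"
```

The same substitution could be applied permanently at index time with a SEDCMD entry in props.conf on the parsing tier.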
Hi all, I added a new monitor for a log file in inputs.conf and there were no errors in splunkd.log. However, it is not being ingested in Splunk, while it worked for other servers. May I know what configuration settings to check/compare between the problematic server and the working servers?   Regards, Zijian
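A few places worth comparing between the problematic and working servers (a sketch of the usual checks):

```
# Show the effective monitor config after all .conf layering
splunk btool inputs list --debug

# Ask the tailing processor what it thinks of each monitored file
splunk list inputstatus
```

`splunk list inputstatus` in particular reports a per-file state (such as ignored or already-read), which often explains a silent non-ingest even when splunkd.log shows no errors.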
The current one that is working is:

[fschange:F:\bau\box\quest]

I need to narrow it to:

[fschange:F:\bau\box\quest\...\arch]

where quest has 5 folders, each of which contains a folder \arch. But it seems not to work using \...\ or \*\. It is on a forwarder, and I have already restarted it as well.
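If the recursive wildcard really isn't honored for fschange stanzas, one workaround (a sketch, with hypothetical folder names since the five real ones weren't shown) is to enumerate the arch directories explicitly:

```ini
[fschange:F:\bau\box\quest\folder1\arch]
[fschange:F:\bau\box\quest\folder2\arch]
# ... one stanza per folder
```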
I am trying to pull up the Risk Event Timeline for a Risk Notable in my Incident Review Dashboard.   Every time I click the link, it gives me an error saying "Risk event has missing or invalid fields".   I know that Risk Event Timeline only works for the risk_object field on Risk Notables. We have noticed a couple of issues that were related to Search-Driven lookups being disabled.  Might there be a lookup table that is referenced here that might be in the same boat? Is there somewhere that defines what fields are required in the Risk Notable? Any way to troubleshoot what is missing or incorrect?
Is there an option to drop older events from the pipeline? Older events can cause frequent bucket rolling and are most likely not useful.
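One option on a heavy forwarder or indexer (a sketch; the sourcetype name is a placeholder) is to route events whose parsed timestamp is older than a cutoff to the nullQueue with an INGEST_EVAL transform, so they are discarded before indexing:

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-drop_old = drop_old_events

# transforms.conf
[drop_old_events]
INGEST_EVAL = queue=if(_time < now() - (7 * 86400), "nullQueue", queue)
```

Since the discarded events never reach an index, they can no longer trigger bucket rolls.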
How can I write the following to get past the join limitation?

index=aws eventName=TerminateInstances
| rename "requestParameters.instancesSet.items{}.instanceId" AS vm_id
| join vm_id type=left max=0
    [ search index=aws source="us-west-1:ec2_instances" sourcetype="aws:description" ]
| dedup vm_id
| table _time, action, vm_id, tags.Name, userName
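A common join-free pattern (a sketch; the instance-id field name in the description events, here `instanceId`, is an assumption to adjust) is to search both datasets in one pass and stitch them together with stats:

```spl
index=aws (eventName=TerminateInstances OR (source="us-west-1:ec2_instances" sourcetype="aws:description"))
| eval vm_id=coalesce('requestParameters.instancesSet.items{}.instanceId', instanceId)
| stats latest(_time) as _time, latest(action) as action,
        latest("tags.Name") as "tags.Name", latest(userName) as userName
        by vm_id
| table _time, action, vm_id, tags.Name, userName
```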
Hello, I am trying to do what I believe would be a correlated subquery. I need to search a file for a value, then re-search that same file for everything related to that value. In a log file of all items and the messages produced as they are processed, I need to search for specific failure messages, grab the item that failed, and re-search the file for all messages related to that item. What I currently have:

source="logs" host="test"
    [ search source="logs" host="test" ("failed to subtract" OR "failed to add")
      | rex "^[(?<item>[\w.-]+)\].+"
      | dedup item
      | fields + item ]
| rex "^[(?<item>[\w.-]+)\]\s(?<message>.+)"
| table _time, item, message

The inner [search] gives results on its own, but when placed as a subsearch, the whole provides no results. Any help would be appreciated!
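One likely culprit (a sketch of the usual fix): when a subsearch returns a field named item, it expands to the literal filter item="value", but item only exists after the rex runs at search time, so the outer search matches nothing against raw events. Renaming the field to search makes the subsearch return bare values that are matched against _raw instead (note also the escaped \[ at the start of the rex, which the original pattern is missing):

```spl
source="logs" host="test"
    [ search source="logs" host="test" ("failed to subtract" OR "failed to add")
      | rex "^\[(?<item>[\w.-]+)\]"
      | dedup item
      | fields item
      | rename item as search ]
| rex "^\[(?<item>[\w.-]+)\]\s(?<message>.+)"
| table _time, item, message
```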
There has been some interest at our organization in setting up Splunk forwarders on OpenStack nodes. Is Splunk able to ingest the cloud metrics from the OpenStack hypervisors? We have had good luck setting up OpenTelemetry for Kubernetes, but I am wondering if there is something similar for OpenStack. Thanks
Hello, Splunk used to get data through HEC on port 8088. But when we moved it to a new HF with a new token, it stopped receiving data under the new setup. Nothing has changed except the HF/server/token used. Any recommendation will be highly appreciated. Thank you so much.
Hello, the search I am using is below. Before trying to chart I got tens of thousands of results, but I would like to create a chart that only displays the following information: EventCode, EventType, subject, ComputerName, dest, process_exec, process_id. Why does my original search work, but when I try to create the chart it doesn't? Everything is done on Windows, so the Event Codes/Types are Windows ones. Is anyone able to fix my search so it will pull only the data after chart, as well as chart it? Thank you!

(Insert Host Name) user="Insert User Name" | chart EventCode, EventType, subject, ComputerName, dest, process_exec, process_id
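chart expects an aggregation (for example, chart count by EventCode); to simply display those seven fields per event, table is the likelier fit (a sketch, keeping the placeholders from the question):

```spl
(Insert Host Name) user="Insert User Name"
| table EventCode, EventType, subject, ComputerName, dest, process_exec, process_id
```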
I have a simple dashboard reporting on file transfers. There is one column I want color coded based on return code: ideally, return code = "0" is green and return code != "0" is red. What I currently have covers the "0" just fine, but doesn't cover non-"0" results:

<format type="color" field="PPA1ReturnCode">
  <colorPalette type="map">{"0":#65A637,"!=0":#D93F3C,"Failure":#D93F3C}</colorPalette>
</format>

I also tried "NOT 0", and that didn't work either.
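A map palette only matches literal cell values, so "!=0" is treated as the string !=0 rather than a condition. An expression palette can branch on the value instead (a sketch in Simple XML):

```xml
<format type="color" field="PPA1ReturnCode">
  <colorPalette type="expression">if(value == "0", "#65A637", "#D93F3C")</colorPalette>
</format>
```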
Is it possible to do this query without using transaction?

index="prod" source="mysource"
| transaction startswith="create happening for test" endswith=("create done " OR "create not done ")
| stats count
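A transaction-free sketch, assuming each create attempt logs one start line and at most one end line, and that the lines share a correlation field (the request_id here is hypothetical):

```spl
index="prod" source="mysource" ("create happening for test" OR "create done" OR "create not done")
| eval phase=if(searchmatch("create happening for test"), "start", "end")
| stats count(eval(phase="start")) as starts, count(eval(phase="end")) as ends by request_id
| where starts > 0 AND ends > 0
| stats count
```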
Hi Team, as per the document https://github.com/splunk/splunk-operator/blob/develop/docs/README.md, we deploy 2 pods: the Splunk Operator and a Splunk Standalone. The Splunk Operator deploys and runs successfully. The Splunk Standalone is giving an issue and the pod cannot be deployed. The errors are:

Back-off restarting the failed container
Readiness probe failed

The pod boots halfway and then fails. Please find the logs below:

STDERR: homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
MSG: non-zero return code

PLAY RECAP *********************************************************************
localhost : ok=48 changed=0 unreachable=0 failed=1 skipped=51 rescued=0 ignored=0

Wednesday 08 June 2022 00:30:45 +0000 (0:02:16.140) 0:03:19.038 ********
===============================================================================
splunk_common : Start Splunk via CLI ---------------------------------- 136.14s
splunk_common : Get Splunk status -------------------------------------- 44.09s
splunk_common : Apply admin password ------------------------------------ 4.02s
Gathering Facts --------------------------------------------------------- 1.28s
splunk_common : Cleanup Splunk runtime files ---------------------------- 0.96s
splunk_common : Update /opt/splunk/etc ---------------------------------- 0.65s
splunk_common : Create .ui_login ---------------------------------------- 0.63s
splunk_common : Find manifests ------------------------------------------ 0.61s
splunk_common : Check for scloud ---------------------------------------- 0.61s
splunk_common : Set general pass4SymmKey -------------------------------- 0.58s
splunk_common : Get Splunk status --------------------------------------- 0.52s
splunk_common : Enable Web SSL ------------------------------------------ 0.50s
splunk_common : Enable Splunkd SSL -------------------------------------- 0.49s
splunk_common : Check if /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key exists --- 0.49s
splunk_common : Reset root CA ------------------------------------------- 0.49s
splunk_common : Remove input SSL settings ------------------------------- 0.48s
splunk_common : Enable splunktcp input ---------------------------------- 0.47s
splunk_common : Trigger restart ----------------------------------------- 0.46s
splunk_common : Check for existing installation ------------------------- 0.46s
splunk_common : Check if /sbin/updateetc.sh exists ---------------------- 0.46s
[root@e2c-master ~]#
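Since the validation failure points at the filesystem backing /opt/splunk/var, one thing worth trying (a sketch; the storage class name and apiVersion are assumptions that depend on your cluster and operator release) is pinning an explicit storage class and capacity on the Standalone custom resource:

```yaml
apiVersion: enterprise.splunk.com/v4   # version depends on the operator release
kind: Standalone
metadata:
  name: s1
spec:
  etcVolumeStorageConfig:
    storageClassName: gp2              # assumption: an EBS-backed storage class
    storageCapacity: 10Gi
  varVolumeStorageConfig:
    storageClassName: gp2
    storageCapacity: 100Gi
```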
Hi, I have two fields: target (server1, server2, ...) and status (ok, nokey). How can I show a count of these fields on a timechart (I mean an overlay chart)? A stacked bar chart showing the count of status by target? Any ideas? Thanks
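timechart only accepts a single split-by field, so one common sketch is to fuse the two fields into one series name and split by that:

```spl
... | eval series=target.":".status
| timechart count by series
```

This yields columns like server1:ok and server1:nokey, which a stacked column chart (or a chart overlay on selected series) can then display.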
Hello, I am trying to establish connectivity between AWS Kinesis Firehose and a Splunk HF using version 6.0.0 of the Splunk Add-on for AWS, and I am having trouble configuring the CA-signed certificates. I am following this documentation, and since my HF is within AWS private cloud I am following this section that has the prerequisite for "the HEC endpoint to be terminated with a valid CA-signed SSL certificate". I have a valid CA-signed SSL certificate but I am unsure about where I need to install it. So far I have updated server/local/web.conf with the certificates so that web UI is secure. Do I need to make any additional adjustments on the HF concerning the certificates? For example, do I need to update inputs.conf in any way to secure HTTP communication? Any help is greatly appreciated! Thank you and best regards, Andrew
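One detail that often trips this up: Firehose talks to the HEC listener on port 8088, which is configured in inputs.conf, not web.conf, so the certificate needs to be installed there as well. A sketch (file path and password are placeholders):

```ini
# inputs.conf on the HF
[http]
disabled = 0
enableSSL = 1
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <private key password>
```

The PEM generally needs to contain the full chain (server certificate plus intermediates) for Firehose to accept it.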
Hello! I have learned so much from this community over the years, but there is one query I am trying to write that I cannot figure out. I have a number of logs, each containing four fields, and each of those fields has a unique set of a few values. I am trying to do a count for each unique value and put it in a three-column table including the field name, value, and count. I know I can hard-code all the values to give them a category/field name, but as these values change over time I would rather not have to do that if possible.

Log examples:

key exchange algo: dh-group-exchange-sha256, public key algo: ssh-dss, cipher algo: aes128-cbc, mac algo: sha256
key exchange algo: ecdh-sha2-nistp256, public key algo: ssh-rsa, cipher algo: aes256-ctr, mac algo: sha256

Desired result:

field        cipher                     count
keyExchange  dh-group-exchange-sha256   ##
keyExchange  ecdh-sha2-nistp256         ##
publicKey    ssh-dss                    ##
publicKey    ssh-rsa                    ##

etc. Is there a way to do this besides hard-coding a field for each cipher? For reference, here is how I am pulling the two-column list of cipher | count without the field name:

base search
| eval cipher=keyExchange.";".publicKey
| makemv delim=";" cipher
| stats count by cipher

This also works for two columns but appears to be a bit slower:

| eval cipher = mvappend(keyExchange, publicKey)
| mvexpand cipher
| stats count by cipher

Thanks!
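One hard-coding-free sketch (assuming the four extracted field names shown below): tag each event with a row id, flatten the wide fields into field/value pairs with untable, then count:

```spl
base search
| streamstats count as row
| table row, keyExchange, publicKey, cipherAlgo, macAlgo
| untable row field cipher
| stats count by field, cipher
```

untable turns each column name into a value of field, which gives the field/cipher/count layout without enumerating the ciphers.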
Hello, we are running Enterprise 8.2.6 (Windows Server). We use a product called Fastvue Syslog Server on another Windows Server as a central syslog server. Fastvue Syslog writes the syslogs into folders such as:

D:\Logs\Syslog\Logs\switch\x.x.x.x\x.x.x.x-YYYY-MM-DD.log
D:\Logs\Syslog\Logs\esx\x.x.x.x\x.x.x.x-YYYY-MM-DD.log

(where x.x.x.x is the syslog client IP address)

The syslog server has the Splunk Universal Forwarder installed and is configured to output Windows Event Logs. The inputs.conf file has the following added in addition to the event logs:

[monitor://D:\Logs\Syslog\Logs\switch\*]
sourcetype = syslog-switch
disabled = false

[monitor://D:\Logs\Syslog\Logs\esx\*]
sourcetype = syslog-esx
disabled = false

On the Splunk indexer, we can see event logs from the Windows Server, but we are not seeing any syslog messages from the logged files. Am I missing something? Thanks in advance.
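One thing to check (a sketch): in a monitor path, * matches only a single path segment, so ...\switch\* matches the x.x.x.x directories but not the .log files one level further down. The ... wildcard recurses through subdirectories:

```ini
[monitor://D:\Logs\Syslog\Logs\switch\...]
sourcetype = syslog-switch
disabled = false

[monitor://D:\Logs\Syslog\Logs\esx\...]
sourcetype = syslog-esx
disabled = false
```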