All Topics



Hello, I am trying to do what I believe would be a correlated subquery. I need to search a file for a value, then re-search that same file for everything related to that value. In a log file of all items and the messages produced as they are processed, I need to search for specific failure messages, grab the item that failed, and re-search the file for all messages related to that item.

What I currently have:

source="logs" host="test"
    [ search source="logs" host="test" ("failed to subtract" OR "failed to add")
      | rex "^[(?<item>[\w.-]+)\].+"
      | dedup item
      | fields + item ]
| rex "^[(?<item>[\w.-]+)\]\s(?<message>.+)"
| table _time, item, message

The inner [search] gives results on its own, but when placed as a subsearch, the whole provides no results. Any help would be appreciated!
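One possible culprit, offered as a sketch rather than a confirmed fix: a subsearch that ends in `fields + item` is rendered as `(item="x" OR item="y")` in the outer search, where `item` does not yet exist as a search-time field, so nothing matches. Using `return` emits the bare values as raw search terms instead; escaping the literal `[` in the rex (an unescaped `[` opens a character class) is also worth doing in both places:

```
source="logs" host="test"
    [ search source="logs" host="test" ("failed to subtract" OR "failed to add")
      | rex "^\[(?<item>[\w.-]+)\]"
      | dedup item
      | return 10 $item ]
| rex "^\[(?<item>[\w.-]+)\]\s(?<message>.+)"
| table _time, item, message
```

The `10` in `return 10 $item` caps how many item values come back; raise it if more than ten items can fail in the window.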
There has been some interest at our organization re: setting up Splunk forwarders on OpenStack nodes. Is Splunk able to ingest the cloud metrics from the OpenStack hypervisors? We have had good luck setting up OpenTelemetry for Kubernetes, but we are wondering if there is something similar for OpenStack. Thanks
Hello, Splunk used to receive data through HEC on port 8088, but after we moved it to a new HF with a new token it stopped receiving data under the new setting. Nothing has changed except the HF/server/token used. Any recommendation will be highly appreciated. Thank you so much.
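A first step that often narrows this down, sketched here with a placeholder hostname and token (not your actual values): hit the new HF's HEC health endpoint, then post a test event directly with the new token:

```
# Is HEC up and listening on the new HF? (placeholder hostname)
curl -k https://new-hf.example.com:8088/services/collector/health

# Does the new token accept an event? (placeholder token)
curl -k https://new-hf.example.com:8088/services/collector/event \
     -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
     -d '{"event": "HEC smoke test"}'
```

If the health check fails, verify the HEC input is enabled on the new HF and that port 8088 is reachable from the sender; if the post returns an invalid-token error, re-check the token value and whether it is enabled.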
Hello, the search I am using is below. Before trying to chart I got tens of thousands of results, but I would like to create a chart that only displays the following fields: EventCode, EventType, subject, ComputerName, dest, process_exec, process_id. Why does my original search work, but when I try to create a chart it doesn't? Everything is done on Windows, so the Event Codes/Types are Windows ones. Is anyone able to fix my search so it will pull only that data after chart, as well as chart it? Thank you!

(Insert Host Name) user="Insert User Name" | chart EventCode, EventType, subject, ComputerName, dest, process_exec, process_id
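A likely explanation, offered as a sketch: `chart` expects an aggregation function (e.g. `count`) plus over/by fields, so listing bare field names returns nothing. If the goal is simply to display those columns, `table` is usually the right command; if the goal is to aggregate, give `chart` a function:

```
(Insert Host Name) user="Insert User Name"
| table EventCode, EventType, subject, ComputerName, dest, process_exec, process_id
```

or, for an actual chart:

```
(Insert Host Name) user="Insert User Name"
| chart count over EventCode by EventType
```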
I have a simple dashboard reporting on file transfers. There is one column I want color coded based on return code. Ideally I could have: return code = "0" is green, return code != "0" is red. What I currently have covers the "0" just fine, but doesn't cover non-"0" results.

<format type="color" field="PPA1ReturnCode">
  <colorPalette type="map">{"0":#65A637,"!=0":#D93F3C,"Failure":#D93F3C}</colorPalette>
</format>

I also tried "NOT 0", and that didn't work either.
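A map palette only matches literal cell values, so "!=0" is treated as the string !=0 rather than a condition. One approach, sketched against the same field name, is an expression palette, which evaluates a condition per cell:

```
<format type="color" field="PPA1ReturnCode">
  <colorPalette type="expression">if(value == "0", "#65A637", "#D93F3C")</colorPalette>
</format>
```

Here everything that is not exactly "0" falls through to red, which covers the "Failure" case as well.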
Is it possible to do this query without using transaction?

index="prod" source="mysource" | transaction startswith="create happening for test" endswith=("create done " OR "create not done ") | stats count
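One transaction-free sketch: it counts completed start/end pairs rather than building full transactions, so verify it matches your data's ordering assumptions before relying on it:

```
index="prod" source="mysource" ("create happening for test" OR "create done" OR "create not done")
| eval marker=if(searchmatch("create happening for test"), "start", "end")
| stats count(eval(marker="start")) as starts, count(eval(marker="end")) as ends
| eval count=min(starts, ends)
```

Unlike `transaction`, this never groups the intervening events, which is exactly why it is much cheaper when only the count is needed.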
Hi Team, As per the document https://github.com/splunk/splunk-operator/blob/develop/docs/README.md, we deploy 2 pods: the Splunk Operator and Splunk Standalone. The Splunk Operator deploys and runs successfully. The Splunk Standalone pod fails to deploy with the errors:

Back-off restarting the failed container
Readiness probe failed

The pod boots halfway and then fails. Please find the logs below:

STDERR: homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
MSG: non-zero return code

PLAY RECAP *********************************************************************
localhost : ok=48 changed=0 unreachable=0 failed=1 skipped=51 rescued=0 ignored=0

Wednesday 08 June 2022 00:30:45 +0000 (0:02:16.140) 0:03:19.038 ********
===============================================================================
splunk_common : Start Splunk via CLI ---------------------------------- 136.14s
splunk_common : Get Splunk status -------------------------------------- 44.09s
splunk_common : Apply admin password ------------------------------------ 4.02s
Gathering Facts --------------------------------------------------------- 1.28s
splunk_common : Cleanup Splunk runtime files ---------------------------- 0.96s
splunk_common : Update /opt/splunk/etc ---------------------------------- 0.65s
splunk_common : Create .ui_login ---------------------------------------- 0.63s
splunk_common : Find manifests ------------------------------------------ 0.61s
splunk_common : Check for scloud ---------------------------------------- 0.61s
splunk_common : Set general pass4SymmKey -------------------------------- 0.58s
splunk_common : Get Splunk status --------------------------------------- 0.52s
splunk_common : Enable Web SSL ------------------------------------------ 0.50s
splunk_common : Enable Splunkd SSL -------------------------------------- 0.49s
splunk_common : Check if /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key exists --- 0.49s
splunk_common : Reset root CA ------------------------------------------- 0.49s
splunk_common : Remove input SSL settings ------------------------------- 0.48s
splunk_common : Enable splunktcp input ---------------------------------- 0.47s
splunk_common : Trigger restart ----------------------------------------- 0.46s
splunk_common : Check for existing installation ------------------------- 0.46s
splunk_common : Check if /sbin/updateetc.sh exists ---------------------- 0.46s
[root@e2c-master ~]#
Hi, I have two fields: target (server1, server2, …) and status (ok, nokey), with a count of status by target. How can I show these fields on a timechart (I mean an overlay chart)? Can a stacked bar chart show the count of status by target? Any idea? Thanks
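One sketch, assuming the raw events carry both fields: combine them so a single split-by series encodes both target and status, then timechart it and pick a stacked column chart in the visualization options:

```
base search
| eval series=target . ":" . status
| timechart count by series
```

Each stacked segment is then one target:status combination (e.g. server1:ok, server1:nokey) over time.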
Hello, I am trying to establish connectivity between AWS Kinesis Firehose and a Splunk HF using version 6.0.0 of the Splunk Add-on for AWS, and I am having trouble configuring the CA-signed certificates. I am following this documentation, and since my HF is within AWS private cloud I am following this section that has the prerequisite for "the HEC endpoint to be terminated with a valid CA-signed SSL certificate". I have a valid CA-signed SSL certificate but I am unsure about where I need to install it. So far I have updated server/local/web.conf with the certificates so that web UI is secure. Do I need to make any additional adjustments on the HF concerning the certificates? For example, do I need to update inputs.conf in any way to secure HTTP communication? Any help is greatly appreciated! Thank you and best regards, Andrew
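For what it's worth, web.conf only secures Splunk Web; HEC on 8088 takes its certificate from the [http] stanza in inputs.conf. A sketch with a placeholder certificate path (not your actual one), assuming a recent Splunk version where the setting is named serverCert:

```
# inputs.conf on the HF (placeholder path to the CA-signed cert + key PEM)
[http]
enableSSL = 1
serverCert = /opt/splunk/etc/auth/mycerts/hec_ca_signed.pem
sslPassword = <private key password, if the key is encrypted>
```

After a restart, Firehose should see the CA-signed certificate on 8088 rather than Splunk's default self-signed one.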
Hello! I have learned so much from this community over the years, but there is one query I am trying to write that I cannot figure out. I have a number of logs, each containing four fields, and each of those fields has a unique set of a few values. I am trying to do a count for each unique value and put it in a three-column table including the field name, value, and count. I know I can hard-code all the values to give them a category/field name, but as these values change over time I would rather not have to do that if possible.

Log examples:

key exchange algo: dh-group-exchange-sha256, public key algo: ssh-dss, cipher algo: aes128-cbc, mac algo: sha256
key exchange algo: ecdh-sha2-nistp256, public key algo: ssh-rsa, cipher algo: aes256-ctr, mac algo: sha256

Desired result:

field        cipher                    count
keyExchange  dh-group-exchange-sha256  ##
keyExchange  ecdh-sha2-nistp256        ##
publicKey    ssh-dss                   ##
publicKey    ssh-rsa                   ##
etc.

Is there a way to do this besides hard-coding a field for each cipher? For reference, here is how I am pulling the two-column list of cipher | count without the field name:

base search
| eval cipher=keyExchange.";".publicKey
| makemv delim=";" cipher
| stats count by cipher

This also works for two columns but appears to be a bit slower:

| eval cipher = mvappend(keyExchange,publicKey)
| mvexpand cipher
| stats count by cipher

Thanks!
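One way to keep the field name without hard-coding any values, sketched here assuming the four extracted fields are named keyExchange, publicKey, cipherAlgo, and macAlgo (adjust the foreach list to your actual extraction names): `foreach` substitutes each field name for the `<<FIELD>>` token, so both the name and its value can be packed into one multivalue field and split back apart:

```
base search
| foreach keyExchange publicKey cipherAlgo macAlgo
    [ eval pairs=mvappend(pairs, "<<FIELD>>" . "=" . '<<FIELD>>') ]
| mvexpand pairs
| eval field=mvindex(split(pairs, "="), 0), cipher=mvindex(split(pairs, "="), 1)
| stats count by field, cipher
```

New values (or even new fields added to the foreach list) show up automatically, with no hard-coded categories.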
Hello, We are running Enterprise 8.2.6 (Windows Server). We use a product called Fastvue Syslog Server on another Windows Server as a central syslog server. Fastvue Syslog writes out the syslogs into folders such as:

D:\Logs\Syslog\Logs\switch\x.x.x.x\x.x.x.x-YYYY-MM-DD.log
D:\Logs\Syslog\Logs\esx\x.x.x.x\x.x.x.x-YYYY-MM-DD.log

(where x.x.x.x is the syslog client IP address) The syslog server has the Splunk Universal Forwarder installed and is configured to output Windows Event Logs. The inputs.conf file has the following added in addition to the event logs:

[monitor://D:\Logs\Syslog\Logs\switch\*]
sourcetype = syslog-switch
disabled = false

[monitor://D:\Logs\Syslog\Logs\esx\*]
sourcetype = syslog-esx
disabled = false

On the Splunk indexer, we can see event logs from the Windows Server, but we are not seeing any syslog messages from the logged files. Am I missing something? Thanks in advance.
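A diagnostic sketch rather than a fix (adjust the host filter to your syslog server's name): the forwarder's own splunkd.log reaches the indexer via the _internal index, and the tailing components usually say whether those monitor paths were picked up, skipped, or blocked:

```
index=_internal host=<your-syslog-server> source=*splunkd.log* (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
| table _time, host, component, log_level, _raw
```

If the paths never appear there at all, the stanzas are likely not being read (wrong app/local location or a typo); if they appear with errors, the message usually names the cause.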
Hello All, I have checked the URLs in User Experience (Pages & AJAX Requests). There are a lot of URLs that don't have requests (0 requests), and we delete them manually. So I thought of an enhancement: automating the deletion of URLs that have 0 requests. 1) Why do we have URLs with 0 requests? 2) Can we automate the delete activity? If yes, what is the improvement in the tool from automating this step? 3) What are the consequences of this step? Thanks in advance, Omneya
Hi, I'm trying to generate a report with the following information:

- Total bandwidth for each user
- List of the top 3 (bandwidth usage) URLs for each user
- Bandwidth for each URL

For example. Thank you!
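A sketch under the assumption that the events carry user, url, and a bytes field (rename these to match your data): aggregate per user/URL, compute the per-user total before filtering, then keep each user's top 3 URLs:

```
base search
| stats sum(bytes) as url_bandwidth by user, url
| eventstats sum(url_bandwidth) as total_bandwidth by user
| sort user, -url_bandwidth
| streamstats count as rank by user
| where rank <= 3
| table user, total_bandwidth, url, url_bandwidth
```

The `eventstats` must run before the `where`, otherwise total_bandwidth would only cover the top 3 URLs instead of all of them.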
Looking to brush off the cobwebs of my Splunk use, I wanted to find a simple query of server activity/traffic for a server on our domain. If anyone has a basic query they use on a regular basis to see traffic on their servers, I'd appreciate it if you could share it; once I get the basic syntax, I can take it from there.
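A minimal starting point, sketched with a placeholder hostname (what counts as "traffic" depends entirely on which sourcetypes you ingest for that host):

```
index=* host="myserver.example.com" earliest=-24h
| timechart span=1h count by sourcetype
```

This shows event volume per hour per sourcetype for the last day; from there, narrowing the index and sourcetype gives more specific views.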
Has anyone created a data visualization add-on or app for stock analysis? I have searched Splunkbase extensively. I want to display open/high/low/close data for stock tracking using a candlestick view model, but can't really find an existing visualization that is able to display stock data in a candlestick view. Any thoughts or suggestions? Thank you
I was going through the tutorial to build "your first app" on the Splunk Development site here, and I could not get the API call to create an index. Running on a Windows 10 development box (trial license). Splunk Enterprise Version: 8.2.6, Build: a6fe1ee8894b. The command below fails and I am not sure why. I can use one of the other two options (CLI or Web UI) to create the index, but wanted to know why the REST API option failed.

C:\apps\splunk\bin>curl -k -u "user":"password" https://localhost:8089/servicesNS/admin/search/data/indexes -d name="devtutorial"
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Action forbidden.</msg>
  </messages>
</response>

Apologies for the formatting, but when I tried to insert it as code, it said it was invalid. I have included an image version below. Thank you.
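One thing worth trying, sketched with the same placeholder credentials: "Action forbidden" from that endpoint is often a permissions or namespace issue rather than a syntax one. Check that the authenticating user's role holds the indexes_edit capability, and try the global services endpoint instead of the servicesNS app namespace:

```
C:\apps\splunk\bin>curl -k -u "user":"password" https://localhost:8089/services/data/indexes -d name="devtutorial"
```

If that succeeds, the original failure was the admin/search namespace rather than the request itself.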
Hello Splunkers, After my own unsuccessful research, I thought you might have the answer. I'm wondering if there is a way to make the thruput variable. Indeed, my search peer may have too large an amount of data to index at a time due to a network issue, and I would like to spread out the indexing during the night, for example. So is there a way to set a throughput ([thruput]) limit while my server is busiest, and unset this limit when it is less used? Thanks in advance for your time and your answer! Regards, Antoine
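For reference, the knob itself lives in limits.conf; to my knowledge there is no built-in scheduler for it, so one hedged approach is an external scheduled job that swaps the value and restarts/reloads the instance. The stanza looks like this (the value is illustrative):

```
# limits.conf on the instance whose indexing rate you want to cap (illustrative value)
[thruput]
maxKBps = 256
```

A cron job (or Windows scheduled task) could write maxKBps = 256 during business hours and maxKBps = 0 (unlimited) at night, though the restart needed to apply it is a cost worth weighing.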
Hi everyone, I want to combine the commands below into a single eval. I have tried a comma but it's not working. How do I do it?

| eval comments= if(Action="create","something has been created",'comments')
| eval comments= if(Action="delete","something has been deleted",'comments')

Thanks.
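A sketch using case(), which collapses the two conditions into one eval; the final true() branch keeps the existing value, mirroring the original 'comments' fallback:

```
| eval comments=case(Action="create", "something has been created",
                     Action="delete", "something has been deleted",
                     true(), 'comments')
```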
Hey everyone, and I hope you're having a great day! I have configured a custom field extraction in the Splunk Search app for my sourcetype, but I don't have the possibility to share it with other users, like I can do on another Splunk instance where I have the Power role (with the Power role, I can share it no problem). I don't want to assign myself the Power role, since it's broad and wouldn't follow the rule of least privilege. Given that, which permission would I need to assign myself in order to be able to share my field extraction with other users?
Hi, I'm wondering if there isn't an issue with the correlation search that comes with Splunk ES, "Threat activity detected". Indeed, my problem comes from the fact that when it's triggered, I then get at least 2 other alerts concerning the "24h threshold risk score" (RBA).

I have taken the original correlation search (at least I think it is):

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| dedup threat_match_field,threat_match_value
| `get_event_id`
| table _raw,event_id,source,src,dest,src_user,user,threat*,weight
| rename weight as record_weight
| `per_panel_filter("ppf_threat_activity","threat_match_field,threat_match_value")`
| `get_threat_attribution(threat_key)`
| rename source_* as threat_source_*,description as threat_description
| fields - *time
| eval risk_score=case(isnum(record_weight), record_weight, isnum(weight) AND weight=1, 60, isnum(weight), weight, 1=1, null()),
    risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),
    risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),
    risk_network=if(threat_match_field IN("http_user_agent", "url") OR threat_match_field LIKE "certificate_%",null(),threat_match_value),
    risk_host=if(threat_match_field IN("file_name", "process", "service") OR threat_match_field LIKE "registry_%",null(),threat_match_value),
    risk_other=if(threat_match_field IN("query", "answer", "src", "dest", "src_user", "user", "file_hash", "http_user_agent", "url", "file_name", "process", "service") OR threat_match_field LIKE "certificate_%" OR threat_match_field LIKE "registry_%",null(),threat_match_value)

And I notice that the mechanism selecting the risk category changes after the first line.

1. risk_system

risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),

If I translate: if the threat_match_field is "query" or "answer", then the risk category is system and risk_system="IOC that matched". In this case this is a domain or URL (because it's a DNS query or answer). --> THIS LINE IS GOOD

2. risk_hash

risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

But in the case of hash, if I translate: if the threat_match_field is "file_hash", then the risk category is NOT hash and risk_hash=null. --> THIS LINE IS WRONG

And it is the same for all the other categories: network, host, other.

So in my opinion the values in the if statements were reversed:

risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

should be

risk_hash=if(threat_match_field IN("file_hash"),threat_match_value,null()),

Is it me? My instance? Or what? Thanks in advance, Xavier