All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have two rows with the following values:

Name  Text  Count
A     ABC   1
A     EFG   1

I want the result displayed in a single row, showing the count as 2 and both Text values for the common name A. Is there a way to achieve this?
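A sketch of one common way to do this in SPL (field names taken from the table above): stats can sum Count and gather the Text values per Name.

```
your_search
| stats sum(Count) AS Count values(Text) AS Text BY Name
```

This yields one row for Name=A with Count=2 and Text as a multivalue field holding ABC and EFG; add `| eval Text=mvjoin(Text, ", ")` if a single delimited string is preferred.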
Hi folks, I want to change the color of each bar in a bar chart; the field values behind these bars are not legend entries. I see that coloring only works for legend entries or with some threshold value. I have count as my legend for now. Please suggest how I can achieve this. Thanks in advance.
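Not an official feature, but one common workaround: per-series colors can stand in for per-bar colors if each bar is turned into its own series, e.g. with transpose, after which charting.fieldColors in Simple XML maps each series name to a color. The field and color values below are made-up examples:

```
your_search
| stats count BY status
| transpose 0 header_field=status
```

```
<option name="charting.fieldColors">
  {"error": 0xBF3030, "ok": 0x62B362, "warn": 0xD9A741}
</option>
```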
I came across a blog post on running Splunk as a container in the AWS cloud: https://www.splunk.com/en_us/blog/cloud/running-splunk-in-amazon-s-elastic-container-service.html Are there any similar blog posts for the Azure cloud? I found a whitepaper that covers Splunk on Azure, but it describes a virtual-machine-based deployment. It would be great if someone could share resources on deploying single-instance Splunk Enterprise as a container on Azure. Thanks!
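Not a blog post, but a rough, untested sketch: the same splunk/splunk Docker image used in the AWS article can be run on Azure Container Instances. The resource group, names, and password below are placeholders, and the commands assume the Azure CLI is installed and logged in:

```
az container create \
  --resource-group my-rg \
  --name splunk-single \
  --image splunk/splunk:latest \
  --ports 8000 8088 9997 \
  --cpu 2 --memory 8 \
  --environment-variables SPLUNK_START_ARGS="--accept-license" SPLUNK_PASSWORD="ChangeMe123!"
```

Note that ACI container storage is ephemeral; for anything beyond a throwaway test, mounting an Azure Files share (the --azure-file-volume-* options of az container create) for the Splunk data directories would be needed to persist data across restarts.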
Hi All, We have set this add-on up following the documentation at https://www.cisco.com/c/en/us/td/docs/security/firepower/630/api/eStreamer_enCore/eStreamereNcoreSplunkOperationsGuide_354.html and we are seeing logs which show the eStreamer listener/collector starting, trying one call, then timing out and stopping. Logs in reverse time order (apologies):

status_id=0 status="Stopped"
2020-05-08 15:20:44,949 Controller INFO Goodbye
2020-05-08 15:20:44,949 Monitor INFO Stopping Monitor.
2020-05-08 15:20:44,949 Controller INFO Stopping...
2020-05-08 15:20:44,948 Controller ERROR TimeoutException: Connection read timeout
Traceback (most recent call last):
  File "/data/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/controller.py", line 244, in start
    diagnostics.execute()
  File "/data/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 96, in execute
    response = connection.response()
  File "/data/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 182, in response
    dataBuffer = self.__read( 8 )
  File "/data/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 173, in __read
    raise estreamer.TimeoutException('Connection read timeout')
TimeoutException: Connection read timeout
2020-05-08 15:20:42,947 Diagnostics INFO Receiving response message
2020-05-08 15:20:42,947 Diagnostics INFO Sending request message
2020-05-08 15:20:42,947 Diagnostics INFO Request message=000100***********061
2020-05-08 15:20:42,947 Diagnostics INFO Creating request message
2020-05-08 15:20:42,931 Connection INFO Using TLS v1.2
2020-05-08 15:20:42,931 Connection INFO Connecting to ******estreamer_host_redacted***.net:8302
2020-05-08 15:20:42,931 Diagnostics INFO Creating connection
2020-05-08 15:20:42,931 Diagnostics INFO Check certificate
2020-05-08 15:20:42,931 Controller INFO Settings: *************REDACTED***************
2020-05-08 15:20:42,931 Controller INFO Processes: 4
2020-05-08 15:20:42,931 Controller INFO Sha256: 38ad0dced0f*****REDACTED******f95bb887692f56b
2020-05-08 15:20:42,931 Controller INFO Platform version: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
2020-05-08 15:20:42,931 Controller INFO Starting client (pid=23050).
2020-05-08 15:20:42,925 Controller INFO eNcore version: 3.6.8

We have checked that the firewall is not blocking port 8302 on the host. The certificate entered on the setup screen appears to pass OK. We even tried increasing the timeouts in TA-estreamer\bin\encore\default.conf. Any other tips on things we might be missing? I thought we would ask here first before digging too deeply into the Python script to see why it's failing. Thanks
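One low-level check that might help narrow this down (a generic sketch, not part of the eNcore add-on): separate "TCP unreachable" from "TLS handshake failing" before digging into the add-on's Python. The host and port passed in would be your FMC's, which are placeholders here.

```python
import socket
import ssl

def check_estreamer_port(host, port, timeout=5):
    """Return (tcp_ok, tls_ok) for host:port.

    (False, False) -> likely a firewall/routing problem despite appearances
    (True, False)  -> port open, but the TLS handshake does not complete
    (True, True)   -> TCP and TLS both fine; problem is higher up
    """
    try:
        raw = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return (False, False)
    try:
        # Certificate checks are disabled on purpose: this probe only
        # tests whether a TLS handshake completes at all.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        tls = ctx.wrap_socket(raw, server_hostname=host)
        tls.close()
        return (True, True)
    except (ssl.SSLError, OSError):
        raw.close()
        return (True, False)
```

Running this from the Splunk host itself, e.g. check_estreamer_port("fmc.example.net", 8302), is a quick way to re-verify the "firewall is open" assumption from the machine that actually matters.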
Hi All, I am trying to extract multiple response values for a single request. After using the "spath" command, the result is shown below. Here, I didn't extract any values and just tabulated the needed information. In the XML file, the ALB city has 4 streets, and each street contains 4 members; this comes back as a single result. I want to break it into 16 results with all details, as shown below. My query looks like:

index=ab host=xy source=*y.xml | spath | table _time host City_Name Street_name Person_code Person_role

The above query gave the results shown in the first picture; now I want to get the results shown in the second picture. Please help me frame the query for the latter part to get the expected results. Thank you.
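Assuming Street_name, Person_code, and Person_role come back from spath as parallel multivalue fields, the usual pattern is mvzip plus mvexpand (a sketch built on the field names in the question; the "|" delimiter is arbitrary):

```
index=ab host=xy source=*y.xml
| spath
| eval zipped=mvzip(mvzip('Street_name', 'Person_code', "|"), 'Person_role', "|")
| mvexpand zipped
| eval Street_name=mvindex(split(zipped, "|"), 0),
       Person_code=mvindex(split(zipped, "|"), 1),
       Person_role=mvindex(split(zipped, "|"), 2)
| fields - zipped
| table _time host City_Name Street_name Person_code Person_role
```

Caveat: this only lines up if the multivalue fields have equal lengths. If there are 4 street values but 16 person values, the street elements would need to be spath-ed individually (one spath per street path) before zipping.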
Hello all, Two Nmon TAs on one AIX host: TA-nmon (based on events) shows a higher CPU load than TA-metricator-for-nmon (based on metrics). I need help figuring out what might cause this.

SPL on data from TA-nmon (events):

| tstats avg(CPU.UPTIME.load_average_1min) AS load_average_1min avg(CPU.UPTIME.load_average_5min) AS load_average_5min avg(CPU.UPTIME.load_average_15min) AS load_average_15min from datamodel=NMON_Data_CPU where (nodename = CPU.UPTIME) host= groupby _time host prestats=true `nmon_span`
| timechart `nmon_span` avg(CPU.UPTIME.load_average_1min) AS load_average_1min avg(CPU.UPTIME.load_average_5min) AS load_average_5min avg(CPU.UPTIME.load_average_15min) AS load_average_15min

SPL on data from TA-metricator-for-nmon (metrics):

| mstats avg(_value) as value where `nmon_metrics_index` (metric_name=os.unix.nmon.system.uptime.load_average_1min OR metric_name=os.unix.nmon.system.uptime.load_average_5min OR metric_name=os.unix.nmon.system.uptime.load_average_15min) host= by metric_name `nmon_span`
| `extract_metrics("load_average_1min load_average_5min load_average_15min")`
| fillnull value=0 load_average_1min load_average_5min load_average_15min
| timechart `nmon_span` avg(load_average_1min) as load_average_1min avg(load_average_5min) as load_average_5min avg(load_average_15min) as load_average_15min

Update: Continuing my investigation. The source is the output of the "uptime" command, which is exactly the same for both the events and metrics TAs. From /opt/splunkforwarder/etc/apps/(TA-metricator-for-nmon OR TA-nmon)/bin/nmon_external_cmd/nmon_external_snap.sh:

# Uptime information (uptime command output)
echo "UPTIME,$1,\"`uptime | sed 's/^\s//g' | sed 's/,/;/g'`\"" >>NMON_FIFO_PATH/nmon_external.dat &

It turns out that the event-based data is accurate, but the metric values are exactly one third of the event values (attached screenshot).
Hi, How do I connect PostgreSQL data to DB Connect? What details need to be collected from the user? Thanks, Vijay Sri S
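For reference, DB Connect typically needs the following from the database owner: host, port (PostgreSQL defaults to 5432), database name, and a username/password with read access. In DB Connect these become an Identity (the credentials) plus a Connection (host/port/database, with the PostgreSQL driver installed). The values below are placeholders:

```
Host:     pg.example.com
Port:     5432
Database: mydb
JDBC URL: jdbc:postgresql://pg.example.com:5432/mydb
```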
Hi, I want to delete a few automatic lookups from the server, as the GUI doesn't give me the option of deleting them, even though I have admin access to the environment. Can someone please guide me?
I have a query like:

index=rxc sourcetype=rxcapp splunk_server_group="" status=2 [| inputlookup quote.csv | fields name | rename name as questname]

My data has a field named questname, which contains all the values in the name column of my CSV plus some other values. I want to use the same CSV without modifying it, but restrict my query to events whose questname is present in the CSV's name column, and then work on those results. The query above does not seem to be working. How do I do that?
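Two ways to debug or restructure this (a sketch; it assumes quote.csv is uploaded as a lookup table file). First, rename before dropping fields so the subsearch emits the filter field you expect, and run the subsearch alone with format to inspect the generated condition:

```
index=rxc sourcetype=rxcapp splunk_server_group="" status=2
    [| inputlookup quote.csv | rename name AS questname | fields questname | format ]
```

Alternatively, skip the subsearch and filter with the lookup command, which avoids subsearch result limits:

```
index=rxc sourcetype=rxcapp splunk_server_group="" status=2
| lookup quote.csv name AS questname OUTPUT name AS matched
| where isnotnull(matched)
```

Running `| inputlookup quote.csv | rename name AS questname | fields questname | format` by itself shows the exact (questname="...") OR (...) expression being generated, so you can confirm the values match the events exactly, including case and whitespace.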
I have some data in the following format which does not split correctly; the events get indexed as one event.

Sample data:

{"date": "5/8/2020", "time": "7:57:47 AM", "client": "187.45.18.205", "flags": "A", "query": "v1.addthisedge.com"}{"date": "5/8/2020", "time": "7:57:47 AM", "client": "188.35.138.205", "flags": "A", "query": "m.addthis.com"}{"date": "5/8/2020", "time": "7:57:47 AM", "client": "186.95.16.121", "flags": "A", "query": "cloud.acrobat.com"}

props.conf:

[monitor:///data/dns/*/*/*/*.json.log]
INDEXED_EXTRACTIONS = json
KV_MODE = none
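Two things worth checking (hedged, since the full config isn't shown). First, [monitor://...] is an inputs.conf stanza, while INDEXED_EXTRACTIONS belongs in props.conf keyed by sourcetype, and it must live where the file is first read (on the universal forwarder, if one is involved). Second, because the JSON objects are concatenated with no newline between } and {, a LINE_BREAKER that splits between them is needed. A sketch, with a made-up sourcetype name:

```
# inputs.conf
[monitor:///data/dns/*/*/*/*.json.log]
sourcetype = dns_json

# props.conf
[dns_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
LINE_BREAKER = }(\s*){
```

LINE_BREAKER splits events at the capture group, so this breaks between each closing } and the next opening {, keeping both braces with their respective events.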
We have identified frequent outages on the Enterprise Security server which cause the Splunk services to go down. No crash log is created when it goes down. Please advise where to look.
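As a starting point (generic, not specific to ES): when splunkd stops without producing a crash log, the _internal index and the OS logs often still hold clues, e.g. splunkd errors just before each outage, or the kernel OOM killer recorded in /var/log/messages. A sketch of the Splunk side, with the host name as a placeholder:

```
index=_internal host=<es_host> sourcetype=splunkd (log_level=ERROR OR log_level=FATAL)
| stats count BY component
| sort - count
```

Also check $SPLUNK_HOME/var/log/splunk/splunkd.log on the server directly for the minutes before each outage, since _internal stops receiving events once splunkd is down.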
Hi - I need to create an alert so that if a process is not running on a Linux server, it sends out an alert. The query below gives me correct results for all the processes running on a server:

index="index-name" source=ps host="hostname*" process="*process_name*"
| dedup host process
| join host [search index="index-name" source=ps host="hostname*" process="*process_name*" | stats latest(host) latest(_time) by host | eval lastSeen='latest(_time)' | fields host lastSeen]
| eval status=if(lastSeen<(_time - 300), "not running","running")
| table host status process

Example output: Host: hostname, Status: running, Process: process_name. But I need to send an alert only when the status is "not running". Could anybody help me with it?
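A sketch of a simpler alert search, reusing the index/source/field names from the question: let stats compute the last time each host/process pair was seen, keep only stale ones, and have the alert trigger when the search returns any results.

```
index="index-name" source=ps host="hostname*" process="*process_name*"
| stats latest(_time) AS lastSeen BY host process
| where lastSeen < now() - 300
| eval status="not running"
| table host process status
```

Set the alert's trigger condition to "number of results is greater than 0". One caveat: this only catches processes that have reported at least once within the search window; a process absent for the entire window would need a lookup of expected host/process pairs to be detected.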
I would like to create Cache_Hit, Cache_Miss, and Revalidate_Hit categories based on the rules below and display them in a pie chart with percentages and count values:

Cache_Hit is when the field event.cache.cacheHit=1 and the field event.cache.cacheStatus!=3
Cache_Miss is when the field event.cache.cacheHit=0 and the field event.cache.cacheStatus=3
Revalidate_Hit is when the field event.cache.cacheHit=1 and the field event.cache.cacheStatus=3
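A sketch using the field names from the question (note the single quotes, which eval needs for field names containing dots):

```
your_search
| eval cache_result=case(
    'event.cache.cacheHit'==1 AND 'event.cache.cacheStatus'!=3, "Cache_Hit",
    'event.cache.cacheHit'==0 AND 'event.cache.cacheStatus'==3, "Cache_Miss",
    'event.cache.cacheHit'==1 AND 'event.cache.cacheStatus'==3, "Revalidate_Hit")
| stats count BY cache_result
```

Rendered as a pie chart, each slice carries its count, and the percentage per slice appears on hover.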
We have SAI installed on a search head (SH), and the add-ons are installed on an indexer (IDX) and a heavy forwarder (HF) in a distributed deployment. The add-ons on the IDX created 3 indexes: em_metrics, infra_alerts, and em_meta. We also have a Windows universal forwarder (machine1) sending metrics logs to the em_metrics index on the IDX. The SAI app on the SH can discover entities without issues, but it doesn't show anything in the Alerts tab for alerts we created in the SAI app. Is this related to the infra_alerts index? We receive the message below on the SH whenever an alert is created within the SAI app:

Received event for unconfigured/disabled/deleted index=infra_alerts with source="source::machine1-Processor.-_Processor_Time-avg" host="host::SH-name" sourcetype="sourcetype::stash". So far received events from 2 missing index(es)

Do we need to install the add-on on the SH so that it can see the infra_alerts index? If so, we may also need to forward SH data to the IDX, right?
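On the last point: forwarding search head data to the indexing tier is standard practice, and it would route the stash events destined for infra_alerts to the IDX (where that index exists) instead of the SH trying to index them locally. A sketch of the SH-side outputs.conf, following the documented "forward search head data" pattern; the server name and port are placeholders:

```
# outputs.conf on the search head
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx.example.com:9997
```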
We have a SHC of three members and one Enterprise Security search head. Prior to 8.0, each was running its own data model accelerations. Now that shared data model summaries are possible, I would like to set this up to reduce the performance impact and disk usage, but I have questions about the exact implementation, as the documentation is vague: https://docs.splunk.com/Documentation/Splunk/8.0.3/Knowledge/Sharedatamodelsummaries I know that in datamodels.conf I need to set acceleration.source_guid, but that's it. My questions are: Do I set this on the ES search head? Do I use /opt/splunk/etc/system/local/datamodels.conf? What would be the best way to verify the data models have been consolidated down to one copy? (Right now the data models are exact copies of each other.)
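For what it's worth, my reading of that doc page (hedged, worth verifying in a test environment first): acceleration.source_guid goes on the consumers, i.e. the SHC members would point at the ES search head's GUID if ES is to be the one building the summaries. A sketch, with a made-up data model stanza name:

```
# datamodels.conf on each SHC member (pushed via the deployer),
# in an app's local/ directory rather than system/local
[Network_Traffic]
acceleration.source_guid = <guid from etc/instance.cfg on the ES search head>
```

As a verification idea: summary data lives on the indexers under each index's datamodel_summary directories, in paths that include the building search head's GUID, so once consolidation takes effect only one GUID's summaries should keep growing while the others age out.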
I am new to Splunk and I have been trying to connect since last weekend; it keeps booting me out and I am unable to connect. I was using PuTTY SSH. This is what I see. Any suggestions?
If I have data in S3 for SmartStore, what happens to that index data if I decide to stop using SmartStore? Will that data go back to the local storage on our indexers? Do we just lose everything in those buckets?
More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
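When this banner appears, the splunkd logs on the forwarding instance usually name the exact destination and error. A sketch for pulling those from _internal:

```
index=_internal sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)
| stats count BY host component
```

On the forwarder's CLI, `splunk btool outputs list tcpout --debug` shows which outputs.conf file supplies each host:port, and `splunk list forward-server` shows which configured destinations are actually active.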
I have a lookup file called template.csv with field values, and I want to rename one field value to another, say mango to banana. How can we do that? Also, if I want to replace the field values in two lookup files, template.csv and template2.csv, renaming the value mango to banana in both, can we do it in one query?
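A sketch of the usual pattern (the field name fruit is a made-up placeholder, since the actual field in template.csv isn't given): read the lookup, rewrite the value, and write it back with outputlookup.

```
| inputlookup template.csv
| eval fruit=if(fruit=="mango", "banana", fruit)
| outputlookup template.csv
```

Since outputlookup writes the current result set to one file, the cleanest approach for template2.csv is a second, identical search against that file; a single search cannot easily write two different lookup files.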