All Posts


Hi @uagraw01, yes, as I said, I experienced this issue in some Splunk installations when there was queue congestion in the Splunk data flow from the forwarders to the indexers. In these cases, the _internal logs have lower priority than the other logs, so they arrive late or don't arrive at all. You can check the queues on your forwarders using a simple search:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

Ciao.
Giuseppe
Hi, have you checked / asked whether there are Splunk Workload Management rules implemented for this search? r. Ismo
You could look at that same data at the OS level, in some of the logs under $SPLUNK_HOME/var/log/splunk/. There are at least splunkd.log, metrics.log, etc. Those contain all the same data as you have in _internal. Of course, you must have shell-level access to all of those source hosts to see this. Just look a couple of pages later in the PDF, where it says "Using the "grep" cli command". That page and the ones after it show how to do this on the command line with log files like metrics.log.
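For example, a quick check along those lines might look like this (paths assume a default Linux install under /opt/splunk; adjust to your $SPLUNK_HOME as needed):

# Queue fill metrics read straight from the file - no _internal index needed
grep "group=queue" /opt/splunk/var/log/splunk/metrics.log | tail -20

# Narrow it down to a single queue, e.g. the indexing queue
grep "group=queue" /opt/splunk/var/log/splunk/metrics.log | grep "name=indexqueue" | tail -5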
I've been running into an issue with a Splunk query I have been using for a long time, and I'm seeing the following error message: "Please select a shorter time duration for your query," even when I'm using a 5-minute time range. I noticed that this error seems to pop up when we use latest=now() in our queries to get the most recent data. However, when I tried the same query with a specific time range, like earliest=-xxh@h latest=-xxh@h, it worked just fine. Any ideas on why latest=now() might not be fetching results as expected? And is there any resolution for working with latest=now()?
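For reference, time modifiers also accept the literal keyword now, so one quick check (index and sourcetype below are placeholders, not from the original post) is to compare the function form against the keyword form:

index=web sourcetype=access_combined earliest=-5m@m latest=now
| stats count

If the keyword form returns results while latest=now() triggers the error, the problem is in how the modifier is parsed rather than in the data.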
This works when we query directly from Splunk Search:

| stats count as field1
| eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*),(?<mode>.*)"
| table host mode
| outputlookup atlassian_maintenance.csv

But when we try it using curl, it fails:

curl -k -u admin:Vzadmin@12 https://dallpsplsh01sp.tpd-soe.net:8089/servicesNS/admin/SRE/search/jobs/export -d search="| stats count as field1 | eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*),(?<mode>.*)" | table host mode | outputlookup atlassian_maintenance.csv"

-bash: syntax error near unexpected token `?'
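The likely culprit is shell quoting: the first inner double quote ends the search="..." argument, so bash then tries to interpret (?<host> itself - hence the unexpected token `?'. A sketch of a safer form (credentials shown as a placeholder) wraps the entire search in single quotes, so the inner double quotes need no escaping, and lets curl handle the encoding:

curl -k -u admin:changeme \
  'https://dallpsplsh01sp.tpd-soe.net:8089/servicesNS/admin/SRE/search/jobs/export' \
  --data-urlencode search='| stats count as field1
| eval field1="dallvcflwb110u,yes;dallvcflwb120u,yes"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*),(?<mode>.*)"
| table host mode
| outputlookup atlassian_maintenance.csv'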
Hi, can you please use the mail server parameter together with the email ID, as described in the docs below?

server="server info"

https://docs.splunk.com/Documentation/Splunk/8.1.0/Alert/Emailnotification
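As a rough sketch (the recipient, server, and base search below are placeholders, not values from this thread), the sendemail search command accepts the same parameter inline:

index=_internal log_level=ERROR
| head 10
| sendemail to="ops@example.com" server="smtp.example.com" subject="Splunk alert"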
I would ask, if all the values are the same for all hosts, then what are you producing a timechart for?
Did you try my suggestion?
Hi, can you please remove the "\" and give it a try?
Hi Splunk Community,

I'm working on a Django-based website server running inside a Docker container, and I'm facing an issue with OpenTelemetry Collector (OTel) data reception. Despite following the official Splunk documentation for installing OTel within a Docker container, the collector installed on my VM isn't receiving any data from my Django application.

Here are the warning logs from the OTel container:

2024-03-14 04:49:05,592 WARNING [opentelemetry.exporter.otlp.proto.grpc.exporter] [exporter.py:293] [trace_id=0 span_id=0 resource.service.name=website trace_sampled=False] - Transient error StatusCode.UNAVAILABLE encountered while exporting metrics to localhost:4317, retrying in 32s.

Initially, my Dockerfile was configured with OTEL_EXPORTER_OTLP_ENDPOINT='localhost:4317'. Considering that might be the issue, I updated it to OTEL_EXPORTER_OTLP_ENDPOINT='otelcol:4317', aiming to communicate directly with the OTel Collector service running as a Docker container. However, I'm still observing attempts to connect to localhost:4317 in the error logs.

Here's a brief overview of my setup:
- Django application running in a Docker container.
- OpenTelemetry Collector deployed as a separate Docker container named 'otel-collector'.
- Dockerfile for the Django application updated to use the OpenTelemetry Collector container endpoint.

Could anyone provide insights or suggestions on what might be going wrong here? How can I ensure that my Django application correctly sends telemetry data to the OTel Collector? Thank you in advance for your help and suggestions!
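Two things stand out here, offered as a sketch rather than a diagnosis: the endpoint was set to otelcol:4317 while the container is reportedly named otel-collector, and the Python gRPC exporter generally expects an explicit scheme. Assuming those container names (django-app and my-django-image below are placeholders), a minimal setup on a shared Docker network would look like:

# Shared network so the containers can resolve each other by name
docker network create otel-net

# Collector container; its --name becomes its DNS name on that network
docker run -d --name otel-collector --network otel-net -p 4317:4317 otel/opentelemetry-collector

# Django app (my-django-image is a placeholder); the endpoint must match
# the collector's container name exactly, with the scheme included
docker run -d --name django-app --network otel-net \
  -e OTEL_EXPORTER_OTLP_ENDPOINT='http://otel-collector:4317' \
  my-django-image

Also note that an environment variable baked into the Dockerfile only takes effect after the image is rebuilt and the container recreated, which would explain why the logs still show localhost:4317.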
@isoutamo As per the PDF you shared, the navigation data below belongs to the _internal index, and we are currently not getting any events from the _internal index. Is there any approach by which I can revive the _internal index data in Splunk?
Hi,

I'm trying to write data to an outputlookup file via a REST API call (by running a search query). The command below works and writes data to the outputlookup CSV file when running the search directly from Splunk:

| stats count as field1
| eval field1="host_abc;host_def"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*)"
| table host
| outputlookup test_maintenance.csv

But it is not working when executing the same search via the REST API. I get an "Unbalanced quotes" error when running this command:

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search="| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv"

And I get the error below when running this command:

Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '\'host_abc'.</msg></messages></response>

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search='| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv'

Appreciate your help. Thank you.
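A sketch of what may be going on in each case: in the first command the shell unescapes the \" correctly, but -d posts the body raw, so the unencoded ; characters can be read as form-field separators and truncate the search mid-quote - hence "Unbalanced quotes". In the second, single quotes pass the backslashes through literally, so Splunk receives \" and reports the malformed expression. Using single outer quotes (no escaping needed) together with --data-urlencode avoids both problems (credentials below are placeholders):

curl -k -u admin:changeme \
  'https://splunksearchnode:8089/servicesNS/admin/search/jobs/export' \
  --data-urlencode search='| stats count as field1
| eval field1="host_abc;host_def"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*)"
| table host
| outputlookup test_maintenance.csv'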
I made a terrible mistake and tried to use Splunk as a non-admin for the first time in a year or so.  With that mistake I experienced normal user woes of job queuing.  In reaction to queuing I went to the job manager to delete all of my own jobs except the latest queued job I cared about.  Upon deletion of older jobs my queued search did not resume within a reasonable period of time (within 5 seconds).  I then went back to view the job activity monitor and saw that jobs I deleted seconds before were still present.   How long is someone expected to wait until queued jobs resume after deletion of older jobs?  Seems like the desired effect only comes after a matter of minutes, not seconds.  Is this configurable?
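If the queueing itself is the pain point, the per-role concurrency quota behind it is configurable in authorize.conf - sketched here for a hypothetical role, and separate from the question of how quickly freed slots are reclaimed after deleting jobs:

# authorize.conf - role name and values are illustrative assumptions
[role_power]
srchJobsQuota = 6
srchDiskQuota = 500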
Here's a specific example. Say I have a row that looks like:

fields _time reserved max_mem-foo max_mem-bar max_mem-bim max_mem-bam

I know in advance that all of the max_mem-* values must be identical but have no way of knowing the suffixes in advance (e.g., I cannot just hardcode "max_mem-foo" as a workaround).

Q: What is the simplest way to get a single max_mem value in a field from the collection of max_mem-* fields?
I'm not fully understanding your pictured query, as you are currently doing an AND query for data in two indexes, which is impossible - you will get no events from index="a" AND index="app_cim" - so I can't see how you are getting results. Perhaps you can describe the data you have, which index it lives in, and the output you are looking for. What I can see is that you are trying to get a count of users who have eventName="xxx" in the join query. Does User come from index=a or index=app_cim? It appears User comes from Principal{} when it has eventName=xxx - that first search doesn't make a lot of sense at the moment. Lines 4, 9 and 10 have no purpose, but you seem to be trying to do something with _time. Do you actually want to calculate min(_time) as firstTime in your stats? Which events do you want to constrain with the 120 seconds, and where is that 120 seconds calculated from?
Hi @marnall

Thank you again for your help. I did some tests, and it appears that if we include _time in | table, the _time within _raw will follow the _time field, but if we use | table without _time, _time will be set to info_min_time. This is also the reason why it didn't work when I tried your first suggestion. Can you try the following and let me know the result? Thank you so much.

index=_internal
| table sourcetype
| head 5
| eval othertestfield="test2"
| collect index=summary testmode=true addtime=true
Unfortunately security constraints prevent me from displaying the actual code or any of the error messages.  
The max_mem value will be identical for all hosts; that's why I need to extract a single value for it. The sum of the per-host values will be compared to the single pool value in a graph. If I understand correctly, the "*-*" notation will process all of the fields. I can get an appropriate total for the per-host value via addtotals on "reserved-*" to reduce the dozen "reserved-<hostname>" fields to a single reserved total value. My problem so far has been that the syntax

| stats values(max_mem-*) as max_mem

fails with: Error in Stats command: The number of wildcards between field specifier "max_mem" and the rename specifier "max_mem" do not match.

The net result is that I cannot extract the known-single value of max_mem from the list of max_mem-<hostname> fields. Thanks
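One pattern that sidesteps the wildcard mismatch, sketched here under the stated assumption that every max_mem-* value is identical, is foreach with coalesce, which copies the first non-null per-host value into a single field without naming any suffix:

| foreach max_mem-* [ eval max_mem=coalesce(max_mem, '<<FIELD>>') ]
| fields - max_mem-*

The single quotes around '<<FIELD>>' matter: eval would otherwise treat a bare field name containing a hyphen as a subtraction.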
This is awesome, thanks @bowesmana. And, yes, it was a typo for the "count".
The best way to debug props is with the Add Data wizard. Save some sample events in a file on your workstation, then go to Settings->Add Data. Select "Upload" and choose your sample events file. Splunk will then upload your file and show how events break with the default settings. Change the settings on the left and click the Apply button to see how that changes the events. When you're happy with the props, click the "Save to clipboard" link to show the settings in a modal you can copy-paste into props.conf in your app.
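The settings you copy out typically form a stanza along these lines (a hypothetical example for illustration - the sourcetype name and values are assumptions, not wizard output):

# props.conf - illustrative stanza for single-line events with a leading timestamp
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19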