
I would ask, if all the values are the same for all hosts, then what are you producing a timechart for?
Did you try my suggestion?
Hi, can you please remove the "\" and give a try  
Hi Splunk Community, I'm working on a Django-based website server running inside a Docker container, and I'm facing an issue with OpenTelemetry Collector (Otel) data reception. Despite following the official Splunk documentation for installing Otel within a Docker container, the collector installed on my VM isn't receiving any data from my Django application.

Here are the warning logs from the Otel container:

2024-03-14 04:49:05,592 WARNING [opentelemetry.exporter.otlp.proto.grpc.exporter] [exporter.py:293] [trace_id=0 span_id=0 resource.service.name=website trace_sampled=False] - Transient error StatusCode.UNAVAILABLE encountered while exporting metrics to localhost:4317, retrying in 32s.

Initially, my Dockerfile was configured with OTEL_EXPORTER_OTLP_ENDPOINT='localhost:4317'. Suspecting that might be the issue, I updated it to OTEL_EXPORTER_OTLP_ENDPOINT='otelcol:4317', aiming to communicate directly with the Otel collector service running as a Docker container. However, I'm still observing attempts to connect to localhost:4317 in the error logs.

Here's a brief overview of my setup:
- Django application running in a Docker container.
- OpenTelemetry Collector deployed as a separate Docker container named 'otel-collector'.
- Dockerfile for the Django application updated to use the OpenTelemetry Collector container endpoint.

Could anyone provide insights or suggestions on what might be going wrong here? How can I ensure that my Django application correctly sends telemetry data to the Otel Collector? Thank you in advance for your help and suggestions!
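A hedged aside on the symptom above: an ENV set in a Dockerfile is baked in at image build time, so if the image was not rebuilt after the change (or the exporter was constructed before the variable was read), the SDK's localhost default can still win. A minimal sketch of the standard endpoint-resolution logic, assuming the collector's docker-compose service name is "otelcol" (it must match your compose file, and both containers must share a Docker network):

```python
import os

# Sketch: resolve the OTLP endpoint the way an instrumented app might,
# falling back to the collector's Docker service name rather than
# localhost. "otelcol" is an assumption for illustration only.
def resolve_otlp_endpoint(default="http://otelcol:4317"):
    # OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry env var.
    return os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", default)

if __name__ == "__main__":
    os.environ.pop("OTEL_EXPORTER_OTLP_ENDPOINT", None)
    print(resolve_otlp_endpoint())
```

A quick way to check what the container actually sees is `docker exec <container> env | grep OTEL`, which should show the updated endpoint after a rebuild.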
@isoutamo As per the PDF you shared, the navigation data below belongs to the _internal index, and we are currently not getting any events from the _internal index. Is there any approach by which I can revive the _internal index data in Splunk?
Hi,

I'm trying to write data to an outputlookup file via a REST API call (by running a search query). The command below works and writes data to the outputlookup CSV file when running the search directly from Splunk:

| stats count as field1
| eval field1="host_abc;host_def"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*)"
| table host
| outputlookup test_maintenance.csv

But this is not working when executing the above search via the REST API. I get an "Unbalanced quotes" error when running:

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search="| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv"

And I get the error below when running this variant:

Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '\'host_abc'.

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search='| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv'

Appreciate your help. Thank you!
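A hedged observation on the error above: "Unbalanced quotes" usually comes from the shell and form-encoding layers rather than from SPL itself. One way to sidestep shell escaping entirely is to build the request body in a language where the SPL's inner double quotes need no backslashes. A sketch (the search string is the one from the post; the encoding step is what curl's -d does implicitly):

```python
import urllib.parse

# Sketch: hold the SPL in a plain Python string, where the inner double
# quotes need no shell escaping, then form-encode it exactly as the
# jobs/export endpoint expects for the "search" parameter.
spl = (
    '| stats count as field1 '
    '| eval field1="host_abc;host_def" '
    '| eval field1=split(field1,";") '
    '| mvexpand field1 '
    '| rex field=field1 "(?<host>.*)" '
    '| table host '
    '| outputlookup test_maintenance.csv'
)

body = urllib.parse.urlencode({"search": spl})
print(body[:40])
```

With the requests library this could then be sent as `requests.post(url, data={"search": spl}, auth=("admin", "admin"), verify=False)` (URL and credentials being placeholders for your own environment), and requests performs the same encoding for you.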
I made a terrible mistake and tried to use Splunk as a non-admin for the first time in a year or so. With that mistake I experienced the normal user woes of job queuing. In reaction to the queuing, I went to the job manager to delete all of my own jobs except the latest queued job I cared about. Upon deletion of the older jobs, my queued search did not resume within a reasonable period of time (5 seconds). I then went back to the job activity monitor and saw that the jobs I had deleted seconds before were still present.

How long is someone expected to wait for queued jobs to resume after deleting older jobs? It seems like the desired effect only comes after a matter of minutes, not seconds. Is this configurable?
Here's a specific example. Say I have a row that looks like:

fields _time reserved max_mem-foo max_mem-bar max_mem-bim max_mem-bam

I know in advance that all of the max_mem-* values must be identical but have no way of knowing the suffixes in advance (e.g., I cannot just hardcode "max_mem-foo" as a workaround).

Q: What is the simplest way to get a single max_mem value in a field from the collection of max_mem-* fields?
I'm not fully understanding your pictured query, as you are currently doing an AND query for data in two indexes, which is impossible - you will get no events from index="a" AND index="app_cim" - so I can't see how you are getting results. Perhaps you can describe the data you have, which index it lives in, and the output you are looking for.

What I can see is that you are trying to get a count of users who have eventName="xxx" in the join query. Does User come from index=a or index=app_cim? It appears User comes from Principal{} when it has eventName=xxx - that first search doesn't make a lot of sense at the moment.

Lines 4, 9 and 10 have no purpose, but you seem to be trying to do something with _time. Do you actually want to calculate min(_time) as firstTime in your stats? Which events do you want to constrain with the 120 seconds, and where is that 120 seconds calculated from?
Hi @marnall

Thank you again for your help. I did some tests, and it appears that if we include _time in | table, the _time within _raw will follow the _time field; but if we use | table without _time, _time will be set to info_min_time. This is also the reason why it didn't work when I tried your first suggestion.

Can you try the following and let me know the result? Thank you so much.

index=_internal
| table sourcetype
| head 5
| eval othertestfield="test2"
| collect index=summary testmode=true addtime=true
Unfortunately security constraints prevent me from displaying the actual code or any of the error messages.  
The max_mem value will be identical for all hosts; that's why I need to extract a single value for it. The sum of per-host values will be compared to the single pool value in a graph. If I understand correctly, the "*-*" notation will process all of the fields. I can get an appropriate total for the per-host values via addtotals on "reserved-*" to reduce the dozen "reserved-<hostname>" fields to a single reserved total value.

My problem so far has been that the syntax

| stats values( max_mem-* ) as max_mem

fails with:

Error in Stats command: The number of wildcards between field specifier "max_mem" and the rename specifier "max_mem" do not match.

The net result is that I cannot extract the known-single value of max_mem from the list of max_mem-<hostname> fields. Thanks
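A hedged suggestion for the wildcard problem in this thread: in SPL, `| foreach max_mem-* [ eval max_mem=<<FIELD>> ]` copies each matching field into a single max_mem field, and since all values are stated to be identical, whichever assignment runs last gives the right answer (untested against this data). The underlying logic, sketched in Python with field names invented to mirror the example row:

```python
# Sketch: collapse a set of identically-valued wildcard fields
# (max_mem-<hostname>) into one max_mem value without knowing the
# hostname suffixes in advance.
row = {
    "_time": 1710390545,
    "reserved": 128,
    "max_mem-foo": 64,
    "max_mem-bar": 64,
    "max_mem-bim": 64,
    "max_mem-bam": 64,
}

values = {v for k, v in row.items() if k.startswith("max_mem-")}
assert len(values) == 1, "expected all max_mem-* values to be identical"
max_mem = values.pop()
print(max_mem)
```

The assertion mirrors the poster's stated invariant; if the values could ever differ, you would instead pick an aggregation (min, max) deliberately.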
This is awesome, thanks @bowesmana. And, yes, it was a typo for the "count".
The best way to debug props is with the Add Data wizard.   Save some sample events in a file on your workstation then go to Settings->Add Data.  Select "Upload" and choose your sample events file.  Splunk will then upload your file and show how events break with the default settings.  Change the settings on the left and click the Apply button to see how that changes the events.  When you're happy with the props, click the "Save to clipboard" link to show the settings in a modal you can copy-paste into props.conf in your app.
I also tried adding this to the query below, but it still picked up more users from the main query, while I only want it to take into account the users I am getting from the subquery.

| appendcols
    [ search index="a" eventName="xxx" ***other conditions here***
    | rename principal{} as User
    | where firstime > _time
    | where maxTime < _time
    | stats count by User ]
@bowesmana , any suggestions here would be great , it's like a loop I am stuck in
Now this is giving me a result, but I want it to pick up the users from the subquery and only fetch details from the main query for times greater than 120 secs; also, there would be multiple users.
I'm trying to write some tests, but we could not get around importing the class: from splunk.persistconn.application import PersistantServerApplication
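One common way to unit-test such code outside a Splunk environment is to register stub modules in sys.modules before importing the handler, so the splunk.* import resolves without a Splunk installation. A sketch, using the class name exactly as written in the post (it is a placeholder here; check the spelling against your Splunk install's persistconn module):

```python
import sys
import types

# Sketch: install fake splunk.* modules so code that does
# "from splunk.persistconn.application import ..." can be imported
# in a plain test environment.
def install_splunk_stub(class_name="PersistantServerApplication"):
    app_mod = types.ModuleType("splunk.persistconn.application")
    setattr(app_mod, class_name, type(class_name, (), {}))
    pkg = types.ModuleType("splunk")
    sub = types.ModuleType("splunk.persistconn")
    sub.application = app_mod
    pkg.persistconn = sub
    sys.modules["splunk"] = pkg
    sys.modules["splunk.persistconn"] = sub
    sys.modules["splunk.persistconn.application"] = app_mod

install_splunk_stub()
from splunk.persistconn.application import PersistantServerApplication
print(PersistantServerApplication.__name__)
```

The stub must be installed before the module under test is imported; with pytest this is often done in a conftest.py so it runs ahead of test collection.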
Maybe you could share your query, as there is not much anyone can suggest other than "do not use join", since it is not really the way to join things in Splunk.
Also, there is a basic problem with that search anyway, which is that you are using a field called "count", which does not exist - your timechart will produce a field called dc(symbol). I assume that is a typo and that your real search does dc(symbol) as count.