All Posts


If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
If you don't know which props.conf file has the sourcetype, use btool to find it:

splunk btool --debug props list tos_access

That will show all of the settings that apply to the sourcetype. The --debug option adds the name of the file where each setting is made. If you don't see the tos_access sourcetype in the Splunk Cloud UI (Settings -> Source types) then there are no settings for it, which might very well explain the difference in behavior.
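With --debug, each output line is prefixed by the file that contributed the setting, roughly like this (the app name and settings below are only illustrative, not taken from your environment):

/opt/splunk/etc/apps/my_app/local/props.conf      [tos_access]
/opt/splunk/etc/apps/my_app/local/props.conf      TIME_PREFIX = ^
/opt/splunk/etc/system/default/props.conf         ANNOTATE_PUNCT = True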
Try with fillnull:

| tstats count as Total where index="abc" by _time, Type span=1d
| timechart span=1d max(Total) as Total by Type
| fillnull value=0
| untable _time Type Total
Same output
Hi @gcusello
1. Could you help me change the current data entity dropdown behaviour to a multi-select option with the above query, i.e. the second dropdown?
2. Also, how do I automatically clear the existing search result panel when I next select a different option from the domain entity dropdown?
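For reference, a multi-select input in Simple XML generally looks something like the sketch below; the token, lookup, and field names are placeholders, not taken from the dashboard above:

<input type="multiselect" token="data_entity" searchWhenChanged="true">
  <label>Data Entity</label>
  <search>
    <query>| inputlookup entities.csv | dedup entity | table entity</query>
  </search>
  <fieldForLabel>entity</fieldForLabel>
  <fieldForValue>entity</fieldForValue>
  <delimiter>,</delimiter>
</input>

Clearing the results panel when the first dropdown changes is usually done with a change handler that unsets the token the panel depends on, along these lines:

<input type="dropdown" token="domain_entity">
  <change>
    <unset token="data_entity"></unset>
    <unset token="form.data_entity"></unset>
  </change>
</input>
<panel depends="$data_entity$">
  <!-- existing results table/search goes here -->
</panel>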
Hello All,
I need to send a request to the Splunk API from a Linux server, but curl is complaining because the search argument is too long (it could be up to 500000 chars). My question is: how can we use @myFile.spl to query the Splunk API? This is what I have done so far, but no luck yet.

1. curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d search=`echo $myVar`
error: Argument list too long

2. curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d @query2.spl
(Format 1 in the query2.spl file --> "search= | search index=myIndex ...." up to 500000 chars)
error: {"messages":[{"type":"FATAL","text":"Empty search."}]}

3. curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d @query2.spl
(Format 2 in the query2.spl file --> search= "| search index=myIndex ...." up to 500000 chars -- the difference is the quotes position)
error: {"messages":[{"type":"ERROR","text":"Error in 'SearchParser': Missing a search command before '\"'. Error at position '0' of search query '\"| search index...."

4. curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d search=@query2.spl
(Format 2 in the query2.spl file --> "| search index=myIndex ...." up to 500000 chars -- the difference with 3 is the quotes position)
error: {"messages":[{"type":"ERROR","text":"Error in 'SearchParser': Missing a search command before '@'. Error at position '0' of search query '@query2.spl'.","help":""}]}
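For what it's worth, curl can read a POST field's value from a file and URL-encode it with --data-urlencode name@file, which also avoids the shell's argument-length limit. A sketch, assuming the file contains only the raw SPL (the host, token, and file name are placeholders):

# query2.spl contains just the search string, e.g.  search index=myIndex ...
curl --noproxy '*' -k \
  -H "Authorization: Splunk myToken" \
  "https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json" \
  --data-urlencode "search@query2.spl"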
Hi,
I have a basic question about the inputs.conf file.
In our apps, we have an inputs.conf file under etc/apps/test/inputs.conf, which is normal.
But what is the difference between etc/system/local/inputs.conf and etc/apps/test/inputs.conf?
Is the inputs.conf file under system an aggregation of all the inputs.conf files of every app?
And which inputs.conf file is taken into account first? The one in system or the one in the apps?
Regards
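For reference, Splunk does not copy app settings into etc/system; it merges every inputs.conf at runtime, with etc/system/local taking precedence over app directories for global settings. You can inspect the merged result, and which file each setting comes from, with btool (the paths and stanza below are only illustrative):

splunk btool inputs list --debug

/opt/splunk/etc/system/local/inputs.conf       [monitor:///var/log/messages]
/opt/splunk/etc/apps/test/local/inputs.conf    index = test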
If a Splunk Search Head image is destroyed, but the rest of the servers/components are up and running, including the Deployment server, Indexers, HWF, and License server, would it be possible to rebuild just the Search Head, or does the entire Splunk environment need to be rebuilt?
Try something like this:

| tstats count as Total where index="abc" by _time, Type span=1d
| timechart span=1d max(Total) as Total by Type
| untable _time Type Total
Without using a lookup... at the least, I need it in updated or notupdated fields.
You could try extracting just that part from your events. If you want help doing that, you should share some raw events in a code block </> to preserve formatting.
Hi, recently I tried to configure the Splunk TA and App for the DELL EMC ECS storage device. After installation and initial configuration everything looks fine except for missing performance data. Capacity and inventory data are available in the DELL EMC ECS App overview dashboard, but I'm lacking performance data.
Here are a few details about the current settings and scenario:
- App and Add-on are installed on the Search Head Cluster, and the Add-on is installed on a HF
- Splunk is version 9.0.5 (higher than recommended in the add-on documentation)
- App and Add-ons are updated to the latest versions
- The DELL EMC ECS device is running 3.7.0.3
- There is communication between the devices (obviously, as we are ingesting some data from the ECS device)
- The Splunk Python version on the HF is 3.7 (the OS version is 2.7, I don't think it matters :P)
- A management user with system monitor privileges is created on the DELL EMC ECS device and used by Splunk
- As the documentation says, I provided the IP of one of the nodes in the VDC (not the virtual IP)
- I even raised the timeout setting in ecs_connect.py to 60 (but there were no timeout errors in the logs)
I know that performance data should be ingested via the flux API, and I do see some data from various dell:flux:* sources, but it seems not to be the data expected by the Dell EMC ECS overview dashboard (or it is not parsed correctly, which I really doubt as it's an officially supported app from Splunkbase).
In the logs I can see one error that occurs every time Dell EMC ECS input data ingestion happens. The error is "product version", which seems really vague to me. Apart from that I can't find anything useful in the logs.
As far as I know nothing else has to be done on the device itself, so I'm kind of lost... Do you have any ideas what else I can check, or have you encountered similar problems?
P.S. I contacted the app vendor and am waiting for a response; if I find a solution in the meantime I'll post it here for other people who may encounter this really weird problem.
Best Regards
Does your new admin account have the admin role associated? If not, did you look at the roles associated and make sure all of the db_connect items are allowed?
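If it helps, one quick way to check (a sketch; the username is a placeholder) is to list the account's roles via the REST endpoint and then confirm those roles carry the db_connect capabilities:

| rest /services/authentication/users splunk_server=local
| search title="my_new_admin"
| table title roles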
Hello,
We are planning to onboard inventory data from vCenter. I would like to confirm whether the "Splunk Add-on for vCenter Logs" has inventory data. If not, can you please point me to the right add-on?
Thanks
If you don't need to compare to a lookup with specific indexes, and all you want to return is a list of indexes with no logs in the last 7 days, try running this search over a 30-day window:

| tstats latest(_time) as latestTime where index=* by index
| where latestTime < relative_time(now(), "-7d@d")
| convert timeformat="%Y/%m/%d" ctime(latestTime)

It will only show indexes that have logged in the past 30 days but have stopped more than 7 days ago.
Did you get this resolved, @rmorrison6?
Came here because we're seeing the same issue in the Lookup Editor app after upgrading to 9.1.1. Users have no way to access their lookups as of now, and we really don't want to roll anything back if we don't have to.
@splunk, we would really appreciate an answer here.
Hello,
How do I fill the gaps from days with no data in a tstats + timechart query?
Query:
| tstats count as Total where index="abc" by _time, Type span=1d
Getting:
Required:
Please suggest.
Thank You
On the EC2 instances there are a few props.conf files in the apps/<appname>/default dirs, but etc/system/local doesn't have a props.conf.
From what I can tell, most of the apps are disabled and their props.conf shouldn't apply to my sourcetype.
I can't access the props.conf of the Splunk Cloud stack, but I can tell from the admin UI that there is nothing configured for my sourcetype either. It's a custom sourcetype.
I'll check regarding the heavy forwarder. I'm not aware of one, but I can't rule that out either.
Thanks for the help!
I just upgraded my dev environment to Splunk Enterprise 9.1.0.1 and Add-on Builder 4.1.3, and I cannot reproduce this error. What versions are you using?