All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi @PickleRick

Thanks for the help - again, I believe it's not working because of this: "Streamed search execute failed". The reason is that the lookup output is streaming and it cannot be the input for the anomalies command. In the search logs I can see:

03-18-2025 17:14:39.482 INFO SearchPhaseGenerator [1439947 searchOrchestrator] - Optimized Search =| search (userid!=null earliest=-1d index=data sourcetype=mydata) | where match(ip,"^\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}$") | lookup maxmind_lookup ip OUTPUT country, region, city | anomalies threshold=0.0001 by field1, field2, ip
03-18-2025 17:14:39.482 INFO ScopedTimer [1439947 searchOrchestrator] - search.optimize 0.001056652
03-18-2025 17:14:39.482 INFO FederatedInfo [1439947 searchOrchestrator] - No federated search providers defined.
03-18-2025 17:14:39.482 INFO PhaseNodeGenerationVisitor [1439947 searchOrchestrator] - FallBackReason: Fallback to 2-phase mode because of empty split key of cmd: anomalies

The search fails immediately; it's not even really executed. I've tried with ChatGPT to change the output from the lookup so that it would be in a non-streaming format, but failed (it's not trivial since my lookup is external, not a CSV file). Still trying to find the right query.

Thanks, Michal
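One untested idea, given the complaint that the streaming lookup feeds anomalies directly: insert a non-streaming (dataset-processing) command such as sort between the lookup and anomalies, so the pipeline up to that point has to materialize its results first. This is a sketch reusing the fields from the search above, not a confirmed fix:

```
| search userid!=null earliest=-1d index=data sourcetype=mydata
| where match(ip,"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
| lookup maxmind_lookup ip OUTPUT country, region, city
| sort 0 ip
| anomalies threshold=0.0001 by field1, field2, ip
```

sort 0 keeps all results (no truncation) while breaking the streaming pipeline; whether this satisfies the anomalies split-key requirement would need to be verified.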
Hi @TJLAN

You say this is a new installation of Splunk - are there any files copied from an existing Splunk instance? Just to confirm, is this an upgrade from a previous version or a completely vanilla install?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
What instance type (standalone, etc.)?  What platform?  Have you tried updating the license?
You can check this on your existing inputs: if you have acknowledgement enabled, you'll have useAck set to true in your inputs.conf stanzas, such as below:

[http://answers]
disabled = 0
host = macdev
index = answers
token = bbe67d25-6eca-41c3-9046-e1e9b75bb571
useAck = true

From the inputs.conf spec:

useACK = <boolean>
* When set to "true", acknowledgment (ACK) is enabled. Events in a request are tracked until they are indexed. An event's status (indexed or not) can be queried from the ACK endpoint with the ID for the request.
* When set to "false", acknowledgment is not enabled.
* This setting can be set at the stanza level.
* Default: false

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
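For completeness, this is roughly how the ACK endpoint mentioned in the spec is queried - a sketch only, with the channel GUID, token and ackIds as placeholders. The client posts the ackIds it received back from its event submissions, and Splunk reports per-ID indexing status:

```
POST https://splunk:8088/services/collector/ack?channel=<channel-guid>
Authorization: Splunk <hec-token>

{"acks": [0, 1, 2]}
```

A response such as {"acks": {"0": true, "1": true, "2": false}} means events 0 and 1 are indexed and event 2 is still pending.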
An event does contain the _time field (even if empty). You cannot remove it from the visualization. The only thing you can do is hide it by declaring it invisible with CSS.
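A minimal Simple XML sketch of the CSS approach - note the .col-time selectors are an assumption about the events viewer's markup and may differ between Splunk versions, and the panel contents are placeholders:

```
<dashboard>
  <row>
    <panel>
      <html depends="$alwaysHidden$">
        <style>
          /* assumption: the time column uses the col-time class */
          .shared-eventsviewer th.col-time,
          .shared-eventsviewer td.col-time { display: none; }
        </style>
      </html>
      <event>
        <search><query>index=_internal | head 10</query></search>
      </event>
    </panel>
  </row>
</dashboard>
```

The depends="$alwaysHidden$" trick keeps the CSS-bearing html panel itself from rendering while its style block still applies.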
Hi @danielbb

As @PickleRick has pointed out in his reply just now, as you have an indexer cluster you should be making changes by pushing your indexer config via a configuration bundle from your Cluster Manager. This means making changes in the manager-apps/yourOrg_inputs/local/inputs.conf file (or similar) and then pushing a bundle.

Splunk will determine if a restart is needed. I think improvements have been made in more recent versions to reduce the number of restarts needed, but there is no guarantee it won't need a restart. When you click "Validate and Check Restart" it should tell you if a restart is required.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Login failed
Your license is expired. Please login as an administrator to update the license.
Since you're talking about a rolling restart, I suppose you're using an indexer cluster. In this case adding an input (as opposed to removing one) might not require a restart (but there are some cases where the CM says it will do the restart anyway; that's one of the pros of having a layer of HFs in front of your indexers).

As per your other question - you can manipulate several config items, including inputs, using the REST API. But you shouldn't do that on a cluster, since your config should be consistent across all nodes.
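As an illustration of the REST route on a standalone (non-clustered) instance, HEC tokens live under the data/inputs/http endpoint. This is a sketch - the host, credentials, token name and index are placeholders:

```
POST https://splunk:8089/services/data/inputs/http
Authorization: Basic <admin-credentials>

name=my_new_token&index=answers&useACK=1
```

The response includes the generated token value. Again, on cluster members this should go through the Cluster Manager bundle instead.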
Great @livehybrid, "If you are using config files to create your HEC tokens", what are my options on-prem to configure the HEC token?
Very interesting @livehybrid, how do I check whether indexer acknowledgment is in place?
Hi, I'd like to keep it an event and not a table.
Hi @danielbb

Receiving cooked data from a HF or receiving HEC directly shouldn't have much impact on the I/O saturation of your disks, because Splunk will write the same amount of data to disk either way. The parsing of HEC data, which will be done on your indexers instead of the HF, may use more CPU/memory, but I do not think disk I/O should be affected.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @danielbb

If you are using config files to create your HEC tokens, which I suspect you are, then yes, you will need to restart Splunk for the new HEC tokens to work. For more info check out https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UseHECusingconffiles#:~:text=Restart%20Splunk%20Enterprise%20for%20the%20changes%20to%20take%20effect.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
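For reference, a minimal sketch of such a token in inputs.conf - the stanza name, index and app path are placeholders, and the token should be a GUID you generate yourself:

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf
[http://my_new_source]
disabled = 0
index = main
token = <generate-a-guid>
```

Then restart with $SPLUNK_HOME/bin/splunk restart for the token to become active.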
Hi @danielbb

Are you running your infra on-premise or using a cloud service such as AWS? If you are using AWS Firehose to send data to HEC then there are specific requirements for load balancing (see https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureanELB).

Also, if you are using indexer acknowledgement with HEC then you need to ensure (similar to Firehose sources) that your load balancer does cookie-based session stickiness, so that the client can connect to the same indexer to check the acknowledgement. Other than that, I believe any modern HTTP load balancing product should work well.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
We are transitioning from getting the HEC data through HFs to sending it directly to the indexers, and we are wondering whether introducing a new data source forces us to do an indexer rolling restart.
Hi @JohnD-Splunker

I have a suspicion that when you do $PhoneNumber$="*" you're actually going to end up with *="*". I would suggest updating $PhoneNumber$ to $PhoneNumber|s$ - this adds quotes around the value.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
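To illustrate the difference - assuming a token value of *, the two forms substitute into the search like this (field name taken from the dashboard in question):

```
| where 'Wireless number and descriptions'=$PhoneNumber$
    becomes:  | where 'Wireless number and descriptions'=*

| where 'Wireless number and descriptions'=$PhoneNumber|s$
    becomes:  | where 'Wireless number and descriptions'="*"
```

The unquoted * breaks the SPL syntax inside where/if, while the |s filter yields a valid quoted string literal.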
Just to confirm, I have added HTTP_PROXY and HTTPS_PROXY and this TA then routes the requests over the proxy.

I've also made a pull request to the app author's Git repo to enable proxy support directly in the command, as I think there are other use cases where customers are unable to set the HTTP_PROXY and HTTPS_PROXY env variables. Feel free to use the curl.py from the pull request, which will allow you to use the proxy param: https://github.com/bentleymi/ta-webtools/pull/34/files

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
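For anyone following along, one place to set those environment variables so that splunkd (and the scripts it spawns, such as this TA's search command) inherits them is splunk-launch.conf - the proxy host and port below are placeholders:

```
# $SPLUNK_HOME/etc/splunk-launch.conf
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
```

A restart of Splunk is needed for splunk-launch.conf changes to take effect.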
I'm trying to have the dashboard return all results if the text field is * or return all phone numbers with a partial search in the text box.

<input type="text" token="PhoneNumber" searchWhenChanged="true">
  <label>Phone number to search</label>
  <default>*</default>
</input>

The search on the dashboard panel:

index="cellulardata"
| where if ($PhoneNumber$ ="*", (like('Wireless number and descriptions',"%"),like('Wireless Number and descriptions',"%$phonenumber$%" )
| eval Type=if(like(lower('Charge description'), "%text%") OR like(lower('Charge description'), "%ict%"), "Text", "Voice")
| eval Direction=if(Type="Voice" AND 'Called city_state' = "INCOMING,CL","Incoming","Outgoing")
| eval datetime=Date." ".Time
| eval _time=strptime(datetime,"%m/%d/%Y %H:%M")
| eval DateTime=strftime(_time, "%m/%d/%y %I:%M %p")
| eval To_from=replace(To_from,"\.","")
| table DateTime, "Wireless number and descriptions", To_from, Type, Direction
| rename "Wireless number and descriptions" as Number
| sort -DateTime

The query returns no results no matter if the text field is empty or not. I've removed the entry below from the search, so I know the rest of the search works:

| where if ($PhoneNumber$ ="*", (like('Wireless number and descriptions',"%"),like('Wireless Number and descriptions',"%$phonenumber$%" )

I've tried comparing this to other dashboards I've seen and searching Google, but no luck for some reason.
As @livehybrid already pointed out - there is only replication of lookups (either kvstore-backed or csv-backed within the SHC). The contents of a particular lookup can be sent as part of a knowledge bundle to indexer(s) if they are needed for a search. But that's it. If an app on your HF uses kvstore (IIRC some modular inputs do so to store "state"), that instance is completely stand-alone. Depending on your needs there might be some way to "replicate" the contents but it would probably mean treating your HF as SH, spawning a search which would effectively do something like | inputlookup <...> | collect <...> Events created this way would get forwarded to your indexer(s). And on your SHC you'd have to schedule a search which would do the opposite operation - search for the latest indexed events and based on them do outputlookup.
Hi @livehybrid I tried version 8.4 and got the same result. Luiz