All Posts


Start with this doc: Write Custom Search Commands.
Yes, breaker characters such as white spaces force Splunk to add quotation marks. If you have mixed values with and without breaker characters, the rex needs to handle both.

| inputlookup messages.csv
| fields Messages
| rename Messages AS search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g s/\"\"/\"/g"

Here is my emulation:

| makeresults format=csv data="Messages
a
b c
d
e f g"
``` the above emulates | inputlookup messages.csv ```

My result is now:

search
("a","b c","d","e f g")
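The two sed expressions can be sanity-checked outside Splunk. Here is a minimal Python sketch (the sample string is my own, not taken from the lookup) that applies the same two substitutions:

```python
import re

def normalize_quotes(s: str) -> str:
    # Mirrors mode=sed "s/ *\" */\"/g": strip spaces around double quotes
    s = re.sub(r' *" *', '"', s)
    # Mirrors "s/\"\"/\"/g": collapse doubled quotes left by format
    s = re.sub(r'""', '"', s)
    return s

print(normalize_quotes('( "a" , "b c" , "d" )'))  # ("a","b c","d")
```

Note that the inner space in "b c" survives, because only spaces adjacent to a quote are removed.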
You might find this blog post useful: https://www.splunk.com/en_us/blog/tips-and-tricks/protocol-data-inputs.html It describes the Protocol Data Inputs app (https://splunkbase.splunk.com/app/1901) that performs custom data handling and pre-processing of the received data before it gets indexed by Splunk. It should be possible with this app to write a custom data handler that will accept your ProtoBuf data.
Hi @senthilec566, You can't send a protobuf message directly to the HTTP Event Collector service. If you're working with an application you've developed, you may find what you need in Splunk OpenTelemetry Collector at https://github.com/signalfx/splunk-otel-collector and its splunk_hec exporter. There are no currently maintained OTel or protobuf modular inputs, but you may enjoy building or reusing a solution from Vert.x under the Protocol Data Inputs add-on at https://splunkbase.splunk.com/app/1901 . Vert.x provides many modules at https://vertx.io. I've also provided a bespoke protobuf example in the past at https://community.splunk.com/t5/All-Apps-and-Add-ons/Could-Splunk-ingestion-proto-buff-msg-via-HEC-endpoint/m-p/639260/highlight/true#M78877.
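To illustrate the general shape of the workaround: HEC only accepts its JSON (or raw) envelope, so whatever decodes the protobuf has to re-wrap the result before posting to /services/collector. A minimal Python sketch of that wrapping step, assuming the message has already been decoded into a dict (the field names here are made up):

```python
import json
import time

def to_hec_event(message: dict, sourcetype: str = "protobuf:decoded") -> str:
    # Wrap an already-decoded message in the HEC event envelope.
    # The protobuf -> dict step happens upstream, e.g. via
    # google.protobuf.json_format.MessageToDict on your message class.
    envelope = {
        "time": time.time(),        # event time in epoch seconds
        "sourcetype": sourcetype,
        "event": message,           # the decoded payload itself
    }
    return json.dumps(envelope)

payload = to_hec_event({"user": "alice", "action": "login"})
```

The resulting string would then be POSTed to https://<host>:8088/services/collector with an "Authorization: Splunk <token>" header.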
Unfortunately this error does not give any reason _why_ the search head cannot connect to the manager node. If the stanza is exactly the same between the working search head and the non-working one, then it could be a network connectivity or firewall issue rather than a Splunk issue. Do you see any errors in the _internal logs that might describe why the search head is failing to connect?
@JohnEGones  Here you go: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Configuresearchheadwithserverconf Does my answer above solve your question? If yes, spare a moment to accept the answer and vote for it. Thanks.
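For quick reference, the stanza those docs describe for a search head joining an indexer cluster looks like the sketch below; the hostname and secret are placeholders, and pass4SymmKey must match the cluster manager's indexer-cluster key exactly:

```ini
# server.conf on the new search head (placeholder values)
[clustering]
mode = searchhead
manager_uri = https://cm.example.com:8089
pass4SymmKey = <indexer-cluster-secret>
```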
Hi Fellow Splunkers, Perhaps I can get a different perspective here. I am setting up a new standalone SH to join an existing indexer cluster, but I am running into an issue: when I point this server at the idx cluster, specifying the idx CM as the manager [manager_uri], I get an error and the SH is not joined as a SH node. I am referencing the docs here: Enable the search head - Splunk Documentation. I'll also note that there is an existing SH cluster already joined to the indexer cluster. When I edit server.conf I get an error that the SH cannot connect to the manager node, even though I have verified and double-checked the stanzas and key values. From what I have described, what might be the issue?
While I do agree with @isoutamo that HEC on a UF is not supported, it can be configured there. It's just unsupported and everyone pretends it's not there. But more to the point, you're apparently using a compose file which assumes you're working with a full Splunk Enterprise installation. While the UF provides some REST API endpoints, it doesn't provide the full functionality that compose file expects.
As @deepakc already mentioned, there are many factors for sizing _any_ Splunk installation, not even going into ES. And with ES even more so. With ES there is much going on "under the hood" even before you enable any correlation searches (managing notables, updating threat intel, updating the assets database and so on). Of course, for any reasonable use case you also need decently configured data (you _will_ want those datamodels accelerated, so you will use resources for summary-building searches).

And on top of that, there are so many ways you can build a search (any search, not just a correlation search in ES) wrong. I've seen several installations of Splunk and ES completely killed by very badly written searches which could be "easily" fixed by rewriting them properly. A simple case: I've seen a search written by a team which would not ask their Splunk admins to extract a field from a sourcetype, so they manually extracted the field from the events each time in their search. Instead of simply writing, for example,

index=whatever user IN (admin1, admin2, another_admin)

which in a typical case limits the set of processed events pretty well at the start of your search, they had to do

index=whatever
| rex "user: (?<user>\S+)"
| search user IN (admin1, admin2, another_admin)

which meant that Splunk had to run the regex against every single event in the given search time range. That was a huge performance hit. That's of course just one example; there are many more antipatterns you can break your searches with.
Sorry, it displays data, but the output is the same as my earlier try: the latest version is displayed across all envs, not the version specific to each env.
No, the app1 value in axe2 should be 120, not 128 (the latest deployed version as per the date timestamp).
No results displayed, but the table returns the same value as my try.
Hi, if I understand right, you are trying to configure a HEC receiver on this node? Splunk UF doesn't support HEC! When you want to use HEC, the instance must be a full Splunk Enterprise instance, such as a heavy forwarder. r. Ismo
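On a heavy forwarder (or any full Splunk Enterprise instance), HEC is enabled in inputs.conf along these lines; the token, stanza name, and index below are placeholders:

```ini
# inputs.conf on a heavy forwarder, not a UF (placeholder values)
[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = <your-generated-token>
index = main
disabled = 0
```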
Hi @Josh1890 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I think this works, thanks
Sizing Splunk for ES has many performance factors to consider; it's not one size fits all. As everything is a search, you need sufficient resources to cater for users and the various aspects of the Splunk environment alongside the ES functions.

We go by a rule of thumb for ES sizing of 100GB per indexer (that is, the amount you're ingesting per day), and I have seen this higher in some cases, so try to understand how much data ingest comes into your Splunk per day.

We typically dedicate the ES SH on its own for large environments. The reason is that when data comes into Splunk, the environment is also placing that data to disk, being a provider of that data, searching the data, and running datamodel searches for the correlation rules, and there will be dashboards. On top of this you may have other users using ES or running ad-hoc searches, so you can see there are many aspects to consider (CPU/RAM/IO/Network); otherwise it can become slow, and you don't want that, since you need results in a timely fashion.

As a guide, the minimum is 16 CPU / 32GB RAM for indexers and the SH. As you have 32 CPU / 32GB RAM, you should be OK as a starting point, but that does depend on the workload. You also need to check that the disk is SSD and IOPS is over 800, and ensure you are not sending such large volumes of data per day that your AIO can't handle all the functions, so keep a check on ingest per day.

How to check correlation search resource consumption? I would start with the Monitoring Console (MC) for usage stats. It's very comprehensive: it will show the load etc., and you can see which searches are consuming memory, which will help you with some aspects of resources. The MC comes with Splunk, so it should be on your AIO. See my links below for reference.
Some tips:
- Ensure only important data sources are onboarded, and that they are CIM compliant via the TAs.
- Enable a few data models at a time based on your use cases (the correlation rules you want to use) and keep monitoring the load over time via the MC; this will help you stay on top of the resources.

Here are some further links on the topics I have mentioned that you should read.
ES Performance Reference: https://docs.splunk.com/Documentation/ES/7.3.1/Install/DeploymentPlanning
MC Reference: https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/DMCoverview
Hardware Reference: https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware
I'm running universalforwarder as a service in Docker. Here is my docker-compose config:

services:
  splunkuniversalforwarder:
    platform: "linux/amd64"
    hostname: splunkuniversalforwarder
    image: splunk/universalforwarder:latest
    volumes:
      - opt-splunk-etc:/opt/splunk/etc
      - opt-splunk-var:/opt/splunk/var
      - ./splunk/splunkclouduf.spl:/tmp/splunkclouduf.spl
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
      - "1514:1514"
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_USER=root
      - SPLUNK_ENABLE_LISTEN=9997
      - SPLUNK_CMD="/opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl"
      - DEBUG=true
      - SPLUNK_PASSWORD=<root password>
      - SPLUNK_HEC_TOKEN=<HEC token>
      - SPLUNK_HEC_SSL=false

I have an HTTP Event Collector configured in my Splunk Free Trial account. When running docker-compose a lot of things seem to go well, and then I hit this:

TASK [splunk_universal_forwarder : Setup global HEC] ***************************
fatal: [localhost]: FAILED! => {
    "changed": false
}
MSG:
POST/services/data/inputs/http/httpadmin********8089{'disabled': '0', 'enableSSL': '0', 'port': '8088', 'serverCert': '', 'sslPassword': ''}NoneNoneNone;;; AND excep_str: No Exception, failed with status code 404: {"text":"The requested URL was not found on this server.","code":404}

I can see no reference to POST/services/data/inputs/http/httpadmin in any Splunk docs. Can anyone shed any light on this please?
I don't know if it makes a difference but your fieldset is not terminated and your earliest and latest aren't referencing the timepicker token correctly.
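For comparison, a properly terminated fieldset with a time input, and a search that references the token, would look roughly like this in Simple XML (the token name here is an example, not taken from your dashboard):

```xml
<fieldset submitButton="false">
  <input type="time" token="time_tok">
    <label>Time range</label>
    <default>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </default>
  </input>
</fieldset>
<!-- ... then inside the panel's search: -->
<earliest>$time_tok.earliest$</earliest>
<latest>$time_tok.latest$</latest>
```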
Hi @tv00638481  The following post helps to understand the steps that need to be followed before an upgrade:  https://community.splunk.com/t5/Installation/What-s-the-order-of-operations-for-upgrading-Splunk-Enterprise/td-p/408003  In this case you need to upgrade the deployment server first, then the HF and UF. Compatibility between Splunk Cloud and forwarders:  https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Service/SplunkCloudservice#Supported_forwarder_versions    ---- Regards, Sanjay Reddy ---- If this reply helps you, Karma would be appreciated
Hello Splunk community! I started my journey with Splunk one month ago and I am currently learning Splunk Enterprise Security. I have a very specific question: I am planning to use about 10-15 correlation searches in my ES and I would like to know if I need to upscale the resources of my Splunk machine, which is an Ubuntu Server 20.04 with 32 GB RAM, 32 vCPU, and a 200 GB hard disk. I have an all-in-one installation scenario because I am just learning the basics of Splunk at the moment, but I would like to know: How many resources do correlation searches in Splunk consume? How much RAM and CPU does one average correlation search consume in Splunk Enterprise Security?