I have some configurations in my app's local app.conf and I would like to read them programmatically, before streaming events. How can I do this using Python? Thanks!
I forgot to mention which apps I'm using... sorry. I tried it with universal forwarder apps and am trying to figure it out. Thanks for the advice.
I'd simply say don't go down this path. sendemail.py is quite well written but a bit confusing for an inexperienced Pythoneer, so you'll put a lot of effort into just one use case. Additionally, you'll get stuck with something you'll have to maintain yourself (what if there are updates to the main sendemail.py? What if there are security fixes? Will you backport those?).
Splunk on its own is "just" a data analytics platform. But if you want to analyze data, you first gotta have it. Splunk can ingest data from a plethora of different sources (and has some add-ons of its own that can capture metrics from servers), but we have no way of knowing what kind of data you have in your installation. And BTW, it's not good practice to send events to the main index. If this is your first ever lab Splunk installation it can be understandable, but in production it definitely shouldn't happen. You want your indexes configured so that you can manage your data reasonably, as in the sketch below.
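For illustration, a minimal indexes.conf sketch that defines a dedicated index instead of letting events fall into main; the index name and retention value here are placeholders, not from this thread:

[web_prod]
homePath = $SPLUNK_DB/web_prod/db
coldPath = $SPLUNK_DB/web_prod/colddb
thawedPath = $SPLUNK_DB/web_prod/thaweddb
# keep roughly 90 days of data before buckets are frozen
frozenTimePeriodInSecs = 7776000

With separate indexes you can set retention and access controls per data type instead of one-size-fits-all on main.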
We cannot tell what data is being stored in your "main" index. You'd have to describe what type of data it is before asking about the meaning of the field values. It would be helpful to have the names of the apps and reporting services involved; then hopefully someone in the community will have experience with them.
Hi all, I want to ask about some of the values in the "main" index. I'm trying to figure out CPU and memory usage for one server, so I tried SPL like the following:

index="main" host="MyServer"
| fields _time, host, source, sourcetype, collection, counter, instance, linecount, object, Value

Here is the question: where does the raw data behind the Value field come from on the server? I tried to match it against my server's CPU and memory in Process Explorer, but I'm not sure, because the values fluctuate so quickly. Can you give me any advice on how I can resolve this question? Thanks!
Start with this doc: Write Custom Search Commands.
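To make that concrete, here is a minimal sketch of a streaming custom search command that reads app.conf before it processes any events. It assumes the splunk-sdk Python package and a command packaged in an app; the stanza and key names below are just examples, and the configs REST endpoint returns the merged view of default and local settings:

import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class ReadConfCommand(StreamingCommand):
    def stream(self, records):
        # self.service is an authenticated splunklib.client.Service
        # bound to the session of the running search.
        conf = self.service.confs["app"]          # app.conf
        stanza = conf["launcher"]                 # e.g. the [launcher] stanza
        version = stanza.content.get("version")   # a key within that stanza
        for record in records:
            record["app_version"] = version
            yield record

dispatch(ReadConfCommand, sys.argv, sys.stdin, sys.stdout, __name__)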
Yes, breaker characters such as white spaces force Splunk to add quotation marks. If you have mixed values with and without breaker characters, the rex needs to handle both.

| inputlookup messages.csv
| fields Messages
| rename Messages AS search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g s/\"\"/\"/g"

Here is my emulation:

| makeresults format=csv data="Messages
a
b c
d
e f g"
``` the above emulates | inputlookup messages.csv ```

My result is now:

search
("a","b c","d","e f g")
You might find this blog post useful: https://www.splunk.com/en_us/blog/tips-and-tricks/protocol-data-inputs.html It describes the Protocol Data Inputs app (https://splunkbase.splunk.com/app/1901) that performs custom data handling and pre-processing of the received data before it gets indexed by Splunk. It should be possible with this app to write a custom data handler that will accept your ProtoBuf data.
Hi @senthilec566, You can't send a protobuf message directly to the HTTP Event Collector service. If you're working with an application you've developed, you may find what you need in Splunk OpenTelemetry Collector at https://github.com/signalfx/splunk-otel-collector and its splunk_hec exporter. There are no currently maintained OTel or protobuf modular inputs, but you may enjoy building or reusing a solution from Vert.x under the Protocol Data Inputs add-on at https://splunkbase.splunk.com/app/1901 . Vert.x provides many modules at https://vertx.io. I've also provided a bespoke protobuf example in the past at https://community.splunk.com/t5/All-Apps-and-Add-ons/Could-Splunk-ingestion-proto-buff-msg-via-HEC-endpoint/m-p/639260/highlight/true#M78877.
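As a rough illustration of the "decode first, then send" approach described above, here is a minimal Python sketch of a bridge that deserializes a protobuf message and re-posts it to HEC as JSON. The event_pb2 module and Event message type are hypothetical stand-ins for whatever protoc generates from your schema; the URL, token, and sourcetype are placeholders:

import json
import requests
from google.protobuf.json_format import MessageToDict

import event_pb2  # hypothetical module generated by protoc from your .proto

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder

def forward_protobuf(raw_bytes):
    msg = event_pb2.Event()          # hypothetical message type
    msg.ParseFromString(raw_bytes)   # decode the protobuf wire format
    payload = {"event": MessageToDict(msg), "sourcetype": "proto:event"}
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()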
Unfortunately this error does not give any reason _why_ the search head cannot connect to the manager node. If the stanza is exactly the same between the working search head and the non-working search head, then it could be a network connectivity or firewall issue rather than a Splunk issue. Do you see any errors in the _internal logs that may describe the reason why the search head is failing to connect?
@JohnEGones  Here you go: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Configuresearchheadwithserverconf Does my answer above solve your question? If yes, spare a moment to accept the answer and vote for it. Thanks.
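For reference, the linked doc boils down to a [clustering] stanza in server.conf on the search head along these lines; the host and key below are placeholders, and recent versions use manager_uri while older ones use master_uri:

[clustering]
mode = searchhead
manager_uri = https://cm.example.com:8089
pass4SymmKey = <the indexer cluster's shared key>

The pass4SymmKey must match the one set on the cluster manager, and the manager's management port (8089 by default) must be reachable from the search head.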
Hi Fellow Splunkers, Perhaps I can get a different perspective here. I am setting up a new standalone SH to be joined to an existing indexer cluster, but I'm running into an issue: when I try to point this server to the idx cluster, specifying the idx CM as the manager [manager_uri], I get an error and the SH will not join as an SH node. I am referencing the docs here: Enable the search head - Splunk Documentation. I also note that there is an existing SH cluster joined to the indexer cluster. When I edit server.conf, I get an error that the SH cannot connect to the manager node, even though I have verified and double-checked the stanzas and key values. From what I have described, what might be the issue?
While I do agree with @isoutamo that HEC on a UF is not supported, it can be configured there. It's just unsupported, and everyone pretends it's not there. But more to the point, you're apparently trying to use a compose file which assumes you're working with a full Splunk Enterprise installation. While a UF provides some REST API endpoints, it doesn't provide the full functionality that such a file expects.
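For context, on a full instance (or a heavy forwarder) HEC is enabled with an inputs.conf sketch along these lines; the token value, index, and names below are placeholders, and again, doing this on a UF is unsupported:

[http]
disabled = 0
port = 8088

[http://my_app_token]
token = 11111111-2222-3333-4444-555555555555
index = my_index
sourcetype = my:sourcetype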
As @deepakc already mentioned, there are many factors for sizing _any_ Splunk installation, not even going into ES. And with ES even more so. With ES there is much going on "under the hood" even before you enable any correlation searches (managing notables, updating threat intel, updating the assets database and so on). Of course, for any reasonable use case you also need decently configured data (you _will_ want those datamodels accelerated, so you will use resources for summary-building searches). And on top of that, there are so many ways you can build a search (any search, not just a correlation search in ES) wrong. I've seen several installations of Splunk and ES completely killed by very badly written searches which could be "easily" fixed by rewriting them properly. A simple case: I've seen a search written by a team which would not ask their Splunk admins to extract a field from a sourcetype, so they manually extracted the field from the events each time in their search. Instead of simply writing, for example,

index=whatever user IN (admin1, admin2, another_admin)

which, in a typical case, limits the set of processed events pretty well at the start of the search, they had to do

index=whatever
| rex "user: (?<user>\S+)"
| search user IN (admin1, admin2, another_admin)

which meant that Splunk had to check for the field in every single event from the given search time range. That was a huge performance hit. That's of course just one example; there are many more antipatterns you can break your searches with.
Sorry, it displays data, but the output is the same as my earlier try: the latest version is displayed across all envs, not the latest version specific to each env.
No, the value for app1 in axe2 should be 120, not 128 (the latest deployed version as per the date timestamp).
No results are displayed, but the table returns the same values as my earlier try.
Hi, if I understand right, you are trying to configure a HEC receiver on this node? Splunk UF doesn't support HEC! When you want to use HEC, the instance must be a full Splunk Enterprise instance, like a heavy forwarder. r. Ismo
Hi @Josh1890 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors