All Posts



Hi @aditsss, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @aditsss, sorry, but if you use a fixed (for all events) value for EBNCStatus, that field will only ever have one value, so when you dedup on this field, you'll always get a single result! Could you describe your requirement in more detail? Ciao. Giuseppe
Hello, I have installed Sysmon and I am trying to send its logs with a UniversalForwarder on that machine to my Splunk indexer and search head. I have tried adding      [WinEventLog://Microsoft-Windows-Sysmon/Operational] disabled = 0 [WinEventLog://"Applications and Services Logs/Microsoft/Windows/Sysmon/Operational"] disabled = 0 [WinEventLog://Applications and Services Logs/Microsoft/Windows/Sysmon/Operational] disabled = 0     to inputs.conf, but none of those versions worked. I have also restarted the UniversalForwarder, and the indexer / search head has the Sysmon app installed. What am I doing wrong?!   P.S.: Sysmon is running and I can see the logged data in the Event Viewer on that machine...
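For what it's worth, only the first of those three stanza forms matches how WinEventLog inputs are usually addressed: the stanza takes the event channel name, while "Applications and Services Logs/..." is the Event Viewer display path, not the channel name. A minimal sketch (the index name here is an assumption; pick whichever index you actually route Sysmon data to):

```ini
# inputs.conf on the Universal Forwarder
# Channel name, as shown under "Log Name" in Event Viewer event details
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
index = main
```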
| rex mode=sed field=mac "s/(..):(..):(..):(..):(..):(..)/\1-\2-\3-\4-\5-\6/g"
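As a quick cross-check outside Splunk, the same colon-to-hyphen substitution can be sketched in Python, mirroring the sed expression above (the sample address is made up):

```python
import re

# Six colon-separated octet pairs, same groups as the rex sed expression above.
MAC_COLON = re.compile(r"(..):(..):(..):(..):(..):(..)")

def colons_to_hyphens(mac: str) -> str:
    """Rewrite XX:XX:XX:XX:XX:XX as XX-XX-XX-XX-XX-XX, like the s/// above."""
    return MAC_COLON.sub(r"\1-\2-\3-\4-\5-\6", mac)

print(colons_to_hyphens("00:1A:2B:3C:4D:5E"))  # 00-1A-2B-3C-4D-5E
```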
Hi Team, I have the below query: index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","") | eval EBNCStatus="ebnc event balanced successfully" | dedup EBNCStatus | table EBNCStatus True I am deduping my EBNC status, so when I select Yesterday in the date filter it shows a count of one, but when I select Last 7 days it still shows a count of one. I want it to show a count of 7 when I select 7 days. Can someone help me with this?
Hello gcusello, Thanks for your inputs. However, like I said, the use case is that I'm looking for the IP that is causing the maximum number of HTTP errors (400s, 500s); let's say I'm trying to find a single IP that is causing over 100 HTTP errors. I think in the query we will have to use the eval and case functions too. Please let me know if you need further clarification on the above. Moh.
Hi, this worked for me, in file ...etc\system\local\web_feature.conf: [feature:dashboards_csp] enable_dashboards_redirection_restriction = false
Hi @mohsplunking, if you need the total count of errors, the solution from @bowesmana is perfect. Let us know if we can help you more; otherwise, please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks for your response. The goal is to list the IPs that are causing the most HTTP errors; let's say where errors are >100.
Hello, I have this simple input that stopped working after renaming the sourcetype (from a Linux server -> indexers): [monitor:///opt/splunk_connect_for_kafka/kafka_2.13-3.5.1/logs/connect.log] disabled = false index = _internal sourcetype = kafka_connect_log   I restarted the universal forwarder many times, but it is not helping. Any other troubleshooting steps?
Hi @Jana42855, the first step is to know the data to search, otherwise it's very difficult! Anyway, you could start by running a search like the following: index=<your_index> (src=* OR dest_ip=* OR dest_port=*) In this way you get all the events containing these fields; then you can analyze them and identify the index and sourcetype to use. Remember that you can see only the indexes you have been enabled for; in other words, if you don't have grants to access an index, you don't see it. Ciao. Giuseppe
I added | xyseries CT,foo,countE to my query; I think it's OK.
Hi,   Try this please : <dashboard version="1.1" theme="dark" script="launcher:nopopup.js">
Hi, are there any plans to make this add-on compatible with Splunk Cloud?
Hi guys, I'm trying to figure out what the prerequisites are to validate Splunk, such as the running service name / application name in Control Panel, and the registry path.
You are not doing what I suggested in my first response. Remove the key_field=_key; you are explicitly telling it to update the SAME row in the KV store.
This query can be further modified into this: index="_internal" source="*metrics.log" per_index_thruput series=* NOT ingest_pipe=* | stats sum(kb) as kb values(host) as host by series However, this query will also show the amount of KB being logged into indexes via summary indexing (sourcetype=stash), which is supposed not to be charged. Hence, I would prefer this query: index=_internal type=usage idx IN (*) source="*license_usage.log" NOT (h="" OR h=" ")
In order to get metrics index info also: | rest /services/data/indexes count=0 datatype=all
Hi, how can we normalize MAC addresses (such as XX:XX:XX:XX:XX:XX or XX-XX-XX-XX-XX-XX) in our environment before implementing the asset and identity framework in Splunk ES, given that we are collecting data from the workspace?
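As a hedged sketch of the normalization step itself, outside Splunk: the canonical form chosen here (lowercase, colon-separated) is an assumption; ES asset/identity matching just needs every source to agree on one consistent form.

```python
import re

# Accepts XX:XX:XX:XX:XX:XX or XX-XX-XX-XX-XX-XX (case-insensitive).
# The backreference \2 rejects mixed separators like 00:1A-2B:...
_MAC_RE = re.compile(r"^([0-9A-Fa-f]{2})([:-])(?:[0-9A-Fa-f]{2}\2){4}[0-9A-Fa-f]{2}$")

def normalize_mac(mac: str) -> str:
    """Return the MAC in one canonical form: lowercase, colon-separated.

    Raises ValueError for strings that are not colon- or hyphen-delimited MACs.
    """
    if not _MAC_RE.match(mac):
        raise ValueError(f"not a recognized MAC address: {mac!r}")
    return mac.lower().replace("-", ":")

print(normalize_mac("00-1A-2B-3C-4D-5E"))  # 00:1a:2b:3c:4d:5e
```

The same rewrite could equally be done at search time with rex mode=sed, or in the lookup-generating script that feeds the asset table.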
The most blunt way to implement this would be to use the constraint on ValueE as subsearch to establish search period (earliest, latest).  I will assume that ValueE and all the other 11 values are already extracted by Splunk.  I will call them other_field01, other_field02, etc. Here is an idea if you are only interested in distinct values of these. index=my_index_plant sourcetype=my_sourcetype_plant [index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226" ValueE > 20 | stats min(_time) as earliest max(_time) as latest] | stats values(other_field01) as other_field01 values(other_field02) as other_field02, ... values(ValueE) as ValueE by Instrument Hope this helps.