All Posts

Set the type field based on which string was found in the event:

cf_org_name="ABB" cf_space_name="qa" cf_app_name=*qa-my-app* index=ocf_* "CACHE Hit" OR "CACHE Miss"
| eval type=if(searchmatch("CACHE Hit"),"Hit","Miss")
| timechart span=1d count by type
Thank you. It is working.
Thank you for the explanation. The rates in seconds you see above are produced by the load balancer upon incoming TCP requests. The logs are later pushed to Splunk for analysis. I don't want to carry out any further calculation. I just want to extract the rate/sec from the raw event and present it over time (x-axis).
The backfill script may help here. See https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/Managesummaryindexgapsandoverlaps
You can put a wrapper around this script that runs it multiple times with the appropriate earliest/latest settings.
Hi @Praz_123,
good for you, see you next time!
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking.
Giuseppe
P.S.: Karma Points are appreciated
@gcusello Thanks for the support.
Hi @somesoni2,

Thanks for your response! It works.

Also, is it possible to change the time range picker value as a token based on some conditions?
- If the present day is Monday and the user selects the option "Exclude weekend", the time range picker should look for Friday's data.
- If the user selects the option "Include weekend", the time range picker should be yesterday.

Thanks in advance!
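No answer to this follow-up appears in the thread, but here is a rough sketch of how the weekday logic could be computed in SPL, for example in a hidden search whose done handler sets the picker tokens. The $exclude_weekend$ token and the earliest_tok/latest_tok names are made up for illustration:

| makeresults
``` determine the current day of the week ```
| eval dow=strftime(now(), "%A")
``` on a Monday with the weekend excluded, target Friday; otherwise target yesterday ```
| eval earliest_tok=if(dow="Monday" AND "$exclude_weekend$"="yes", "-3d@d", "-1d@d")
| eval latest_tok=if(dow="Monday" AND "$exclude_weekend$"="yes", "-2d@d", "@d")

On a Monday with the weekend excluded this yields Friday's whole day (-3d@d to -2d@d); otherwise it yields yesterday (-1d@d to @d).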
I used the SPL you provided but it didn't solve it. So I adjusted the SPL: I grouped the same user's events within a period into transactions, and then filtered for the cases where only Eventcode 4769 appears.

index="xx"
| transaction user maxspan=24h maxpause=10h connected=false
| search NOT Eventcode IN (4768,4770,4624) AND Eventcode=4769
Hi @john_snow00,
I try to explain:

<your_search>                       --- it's your search, e.g. index=your_index sourcetype=your_sourcetype
| rex "Rate\s+(?<Bytes>\d+)\/sec"   --- Bytes field extraction
| eval MB=Bytes/1024/1024           --- change measure of Bytes field from bytes to MB
| timechart sum(MB) AS MB           --- sum of the traffic for time periods; it's possible to define this span period

Ciao.
Giuseppe
So I have the following search and I want to create a dashboard with separate columns for "Hits" and "Misses". Seems this should be pretty straightforward but I am lost in joins, stats, evals, etc.:

cf_org_name="ABB" cf_space_name="qa" cf_app_name=*qa-my-app* index=ocf_* "CACHE Hit" OR "CACHE Miss"
| timechart span=1d count by type

How can I convert this to a chart with 2 columns which show Hits and Misses per day? Thanks
Hi @ITWhisperer,

Sorry for the delay. My expectation is: suppose every day we have data at 22:00, we need to keep that data and ignore the rest of the data. Can outlier detection be the option to ignore the data coming in with a different timestamp?

Please note: it is not always 22:00 data, it can be any time, but we have to ignore the other timestamp data apart from the usual one.

Base search:

| mstats sum(Entity.InMessageCount.count.Sum) as count span=1h where index=cloudwatch_metrics AND Namespace=Entity AND Environment=prod AND EntityName="Order.SupplierDepot" AND ServiceDenomination=OutboundBatcher by Namespace, Environment, ServiceDenomination, MetricName, EntityName
| where count > 0

Output:

_time             Namespace  Environment  ServiceDenomination  MetricName        EntityName          count
2023-10-06 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  1
2023-10-07 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  2
2023-10-08 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  3
2023-10-09 09:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  4
2023-10-09 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  5
2023-10-10 09:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  6
2023-10-10 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  7
2023-10-11 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  8
2023-10-11 22:00  Entity     Test         TestBoundBatch       TestMessageCount  TestOrder.Supplier  9
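No reply follows in this digest, but one possible sketch, assuming the "usual" timestamp is simply the hour that occurs most often per entity (the hour and usual_hour field names are illustrative, not from the thread):

| mstats sum(Entity.InMessageCount.count.Sum) as count span=1h where index=cloudwatch_metrics AND Namespace=Entity AND Environment=prod AND EntityName="Order.SupplierDepot" AND ServiceDenomination=OutboundBatcher by Namespace, Environment, ServiceDenomination, MetricName, EntityName
| where count > 0
``` extract the hour of each data point ```
| eval hour=strftime(_time, "%H")
``` find the most frequent hour per entity, then keep only matching rows ```
| eventstats mode(hour) as usual_hour by EntityName
| where hour=usual_hour
| fields - hour, usual_hour

This keeps only the rows whose hour matches the most frequent hour for that EntityName; verify that mode picks the intended hour when the data is sparse.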
Use the fieldsummary command to get the field info then calculate the percentage from that info. It's not clear which percentage is sought so modify the eventstats and eval commands below as necessary.

index=_internal
| fieldsummary
``` Get the total number of fields ```
| eventstats sum(count) as Total, sum(distinct_count) as TotalDistinct
``` Compute the percentages ```
| eval Pct=round(count*100/Total,2), DistPct=round(distinct_count*100/TotalDistinct,2)
Thank you, Giuseppe. Can you please explain it line by line?
The index and sourcetype fields have the advantage of being baked into Splunk - they're present in every event with little to no effort. The same cannot be said for user-defined fields. You would need to come up with a way to ensure the my_modular_input_identifier field gets unique values at index time across indexers/HFs while surviving restarts.

At the risk of redundancy, I believe sourcetypes should be permanently associated with shapes of data. If the data shape doesn't change then the sourcetype should not change, either. Users should not be changing the sourcetype of data without just cause.

Similarly, an index is a storage location rather than an attribute of data. Since users likely are not familiar with the Splunk storage environment or with the access and permissions of indexes, they should not be changing the index names in inputs. Tell them where their data goes and don't allow changes.
Hi All,

Appreciate some suggestions for a problem I'm facing. I have a search which outputs a few results, and what I want to do is take each result's _time, modify the earliest and latest times to be within +/- 1 minute of the event, and pass on a value from a certain field to a second search.

I have looked at other answers and I can see suggestions for using subsearches, and also the map command. The problem with those is that the events from the original search are not kept. With the map command you can pass specific fields from the first search to be kept using further evals, however this gets tedious when you want to keep as many fields as possible.

Example: First search (checks for 'file create' events in Sysmon):

index=sysmon EventId=11 file_name=test_file* file_name="test_file.txt"

Let's say this produces 3 results, with 3 different times and 3 different users:

time 1  test_file.txt  user 1
time 2  test_file.txt  user 2
time 3  test_file.txt  user 3

Bear in mind there would be other fields too in the actual events. Then what I would like to do is take time 1, for example, extend the time range by 1 minute either side, and use a second search to pass in the file name and user name to see where this file was downloaded from.

Second search:

index=web file_name=test_file.txt earliest=(time1 - 1min) latest=(time1 + 1min) user=user1

This should give me an additional event with the corresponding file download (with url etc.), whilst keeping the 3 events from the 1st search. So when you look at all events, you would have both the file download event from the web index, and the file create event from Sysmon, while keeping all the fields and values from both events.

Appreciate any ideas. Thanks!
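No answer appears here, but a minimal sketch of the map approach with the +/- 1 minute window, using the field names from the question (et and lt are hypothetical helper fields; map substitutes $...$ tokens from each input row, and earliest/latest accept epoch values):

index=sysmon EventId=11 file_name=test_file* file_name="test_file.txt"
``` compute a +/- 60 second window around each event ```
| eval et=_time-60, lt=_time+60
``` run the second search once per result, substituting the tokens ```
| map maxsearches=10 search="search index=web file_name=$file_name$ user=$user$ earliest=$et$ latest=$lt$"

As the question notes, map replaces the originating events; one workaround under the same assumptions is to append the first search back afterwards, e.g. | append [search index=sysmon EventId=11 file_name=test_file* file_name="test_file.txt"], so both sets of events appear together.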
Hi All, we have some process-related services, like application services running in Windows. How can I get their status? Example below:

cybAgent.bin
event_demon -A as_server -A
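No answer was posted here, but a hedged sketch: if the Splunk Add-on for Microsoft Windows is collecting host monitoring data, a WinHostMon service input can report this. The index name and the exact field names (Name, State) depend on your deployment and should be verified against your data:

index=windows sourcetype=WinHostMon source=Service
``` filter to the services of interest ```
| search Name="cybAgent*" OR Name="event_demon*"
``` show the most recent state per service ```
| stats latest(State) as State by Name

If cybAgent.bin and event_demon run as plain processes rather than registered services, the equivalent would be a WinHostMon process input (source=Process), with stats by process name.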
Thanks for the reply! This raises one more question. I wonder about a specific foolproof solution, if you will:

If I define a field which always has the same unique value, e.g.

my_unique_modular_input_identifier="A(*#IKF)ApdSAODF)SIEKSD"

and if I use this field in all my reports, my tags, etc., instead of using sourcetype or index values, then - should this bypass the issue correctly? Besides the efficiency problem of querying multiple indexes, is there any other obvious disadvantage to this solution? Is this a common solution for this type of problem?

Thanks!
Hi All, I need help building an SPL query that would return all available fields mapped to their sourcetypes/sources, looking across all indexers and crawling through all indexes (index=*).

I currently use the following to list all the fields and their extracted values, but I have no idea where they are coming from, or what their sourcetype and source are:

index=* | fieldsummary
| search values!="[]"
| rex field=values max_match=0 "\{\"value\":\"(?<extracted_values>[^\"]+)\""
| fields field extracted_values

Thank you!
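No answer follows in this digest, but one commonly used sketch for mapping field names to where they come from: foreach * appends each event's field names to a multivalue field, and stats then groups them by index, sourcetype, and source. The field_names name is arbitrary, and this is expensive over index=*, so consider sampling or a short time range:

index=*
``` collect the name of every field present on each event ```
| foreach * [ eval field_names=mvappend(field_names, "<<FIELD>>") ]
``` group the collected field names by origin ```
| stats values(field_names) as fields by index, sourcetype, source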
As I suggested, try the original query with a valid expression.

index="xx"
| transaction user maxspan=24h maxpause=10h startswith=("Eventcode=4768" OR "Eventcode=4770" OR "Eventcode=4624") endswith="Eventcode=4769" keepevicted=true
| search Eventcode=4769 NOT (Eventcode=4768 OR Eventcode=4770 OR Eventcode=4624)
If a datamodel returns "unknown" for a field value, it's because the expected field(s) were not found. Perhaps the 4726 events are not tagged correctly.
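A quick way to check the tagging assumption (the index name here is a placeholder for wherever your Windows Security events live):

index=wineventlog EventCode=4726
``` see which tags and eventtypes actually fire on these events ```
| stats count by tag, eventtype

If the tags the datamodel's constraints expect do not appear here, the events will not be mapped into the datamodel and its fields can fall back to "unknown".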