Thanks for the reply. I don't think punct will work for my requirement, as I am creating an alert, so it's not a one-time thing. But thanks for looking into it.
The fact that you have RAG_Status and count in your legend would indicate you have done a stats count, not a chart count. See the difference in the tabled output between using this

| makeresults count=200
| eval KPI_Score=random() % 100, KPI_Threshold=80, SWC="SWC:".(random() % 5)
| eval RAG_Status = case(KPI_Score >= KPI_Threshold, "Green", KPI_Score >= (KPI_Threshold - 5), "Amber", KPI_Score < (KPI_Threshold - 5), "Red")
| chart count BY SWC RAG_Status
| sort SWC

and then using stats rather than chart. In the chart case, you should end up with the column SWC and then Amber, Green and Red, but if you use stats, you will get SWC, RAG_Status and count. In the first case you can stack the data perfectly OK.
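To see the long-vs-wide difference outside Splunk, here is a minimal Python sketch. The (SWC, RAG_Status) pairs are invented stand-ins for the events the makeresults example would produce:

```python
from collections import Counter, defaultdict

# Sample (SWC, RAG_Status) pairs, standing in for the makeresults events.
events = [
    ("SWC:0", "Green"), ("SWC:0", "Red"), ("SWC:1", "Green"),
    ("SWC:1", "Green"), ("SWC:1", "Amber"), ("SWC:2", "Red"),
]

# "stats count BY SWC RAG_Status" -> long format: one row per pair
stats_rows = sorted(Counter(events).items())
for (swc, rag), count in stats_rows:
    print(swc, rag, count)

# "chart count BY SWC RAG_Status" -> wide format: one row per SWC,
# one column per RAG_Status value (what a stacked chart needs)
chart = defaultdict(Counter)
for swc, rag in events:
    chart[swc][rag] += 1
for swc in sorted(chart):
    print(swc, dict(chart[swc]))
```

The wide shape is what lets the visualization stack one bar segment per RAG_Status column.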
@JandrevdM as your search is doing the same search twice, just with a different user, you'd be better off doing a single search and splitting by user, e.g. similar to your existing search:

index=db_assets sourcetype=assets_ad_users ($user1$ OR $user2$)
| dedup displayName sAMAccountName memberOf
| makemv delim="," memberOf
| mvexpand memberOf
| rex field=memberOf "CN=(?<Group>[^,]+)"
| where Group!=""
| stats values(Group) as Groups by user

which will give you a user column and then a multivalue field with the list of groups. If you then want to automatically show the differences between the two users, you can follow that with

| transpose 0 header_field=user
| eval UniqueU1=mvmap(User1, if(User1!=User2,User1,null()))
| eval UniqueU2=mvmap(User2, if(User2!=User1,User2,null()))
| eval Common=mvmap(User1, if(User1=User2,User1,null()))

and it will give you a list of groups unique to user 1, groups unique to user 2, and the common groups. However, your existing search could be done more efficiently with

index=db_assets sourcetype=assets_ad_users ($user1$ OR $user2$)
| fields displayName sAMAccountName memberOf
| stats latest(*) as * by user
| eval memberOf=split(memberOf,",")
| rex field=memberOf max_match=0 "CN=(?<Group>.+)"
| fields - memberOf

If you really want a row-by-row breakdown of groups, you can do the base search and then just do this

| chart count over Group by user
| foreach * [ eval <<FIELD>>=if("<<FIELD>>"="Group", <<FIELD>>, if('<<FIELD>>'=1, "Member", "Missing")) ]

which will tell you the membership status of each group per user.
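The transpose/mvmap comparison boils down to two set differences and an intersection. A sketch in Python, with made-up users and group names purely for illustration:

```python
# Hypothetical AD group memberships for two users (names are invented).
user1_groups = {"Domain Users", "VPN", "Finance"}
user2_groups = {"Domain Users", "VPN", "Engineering"}

unique_u1 = sorted(user1_groups - user2_groups)   # groups only user 1 has
unique_u2 = sorted(user2_groups - user1_groups)   # groups only user 2 has
common    = sorted(user1_groups & user2_groups)   # groups both users share
```

The mvmap/null() trick in SPL computes the same three lists, since null() entries are dropped from the multivalue result.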
Thanks for the response, this will not work as I am not searching for any specific text I just shared the sample, it can be anything.
Hi smart folks. I have the output of a REST API call as seen below. I need to split each of the records, as delimited by the {}, into its own event, with each of the key:values defined for each record.

[
  {
    "name": "ESSENTIAL",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 17,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  },
  {
    "name": "ADVANTAGE",
    "status": "ENABLED",
    "compliance": "EVALUATION",
    "consumptionCounter": 0,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Jul 09,2024 22:49:25 PM"
  },
  {
    "name": "PREMIER",
    "status": "ENABLED",
    "compliance": "EVALUATION",
    "consumptionCounter": 0,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Aug 10,2024 21:10:44 PM"
  },
  {
    "name": "DEVICEADMIN",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 2,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  },
  {
    "name": "VM",
    "status": "ENABLED",
    "compliance": "COMPLIANT",
    "consumptionCounter": 2,
    "daysOutOfCompliance": "-",
    "lastAuthorization": "Dec 11,2024 07:32:21 AM"
  }
]

Thanks in advance for any help you all might offer to get me down the right track.
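Purely to illustrate the shape of the transformation being asked for, here is a Python sketch on a truncated stand-in payload (two records with a subset of the keys). Each array element becomes its own record with the key/value pairs as fields, which is the effect spath plus mvexpand would have inside Splunk:

```python
import json

# Truncated stand-in for the REST API payload shown in the question.
payload = '''
[
  {"name": "ESSENTIAL", "status": "ENABLED", "compliance": "COMPLIANT"},
  {"name": "ADVANTAGE", "status": "ENABLED", "compliance": "EVALUATION"}
]
'''

# Split the array: one dict per record, keys become fields.
records = json.loads(payload)
for rec in records:
    print(rec["name"], rec["compliance"])
```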
You're right in that location-based analysis can often highlight interesting things in data. Postal codes are common in many countries. I used Australian postcodes along with postcode population density information to do some covid-related dashboards some years ago. It's also possible to do geocoding, e.g. using Google's API https://developers.google.com/maps/documentation/geocoding/overview (there are others), to convert addresses to lat/long and then get postcode information. I have used that in the past to do distance calculations between GPS coordinates using the haversine formula, so you can include a distance element in your events where relevant, e.g. to answer the question "where's the nearest...?" What is the challenge you face - is it getting reliable postcode data from your event data? You can sometimes find good sources of postcode-to-GPS-coordinate data; I found some downloadable Australian CSV files containing Suburb/Postcode/GPS coordinate data that I used as a lookup dataset, which you can then use in your dashboard.
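For reference, the haversine calculation mentioned above can be sketched as follows. The mean Earth radius and the test coordinates (roughly Sydney and Melbourne CBDs) are my assumptions for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Sydney CBD to Melbourne CBD, roughly 714 km great-circle
d = haversine_km(-33.8688, 151.2093, -37.8136, 144.9631)
```

The same formula can be written as an SPL eval over lat/long fields once they are in the event.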
@Aresndiz The data in Splunk is the data being sent by that machine. What tells you that the data in Splunk is not the same as the data on the server? Splunk will not change the data coming from your server. I note that the table and the event list do not appear to have the same information, e.g. CPU instance 13 has a reading of 9.32 in your table, yet that number does not match any of the event data you show. Is this what you mean? CPU measurements are sometimes difficult to compare - in your example, you show data from a 16-core CPU with individual cores ranging from 7 to 60% and a total of 15%. What is the sampling rate of your readings being sent to Splunk? Each reading represents the average value since the previous reading, so if you use a different sampling interval when looking at data on your server, you may well see different values. You need to be comparing like with like.
The use of makeresults is to show examples of how to use a technique, so what you need is the eval statement that sets the field 'color' based on the values of State_after. Add it after your stats command | eval color=case(State_after="DOWN", "#FF0000", State_after="ACTIVE", "#00FF00", State_after="STANDBY", "#FFBF00")  
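That case() maps each State_after value to a hex colour and returns null when nothing matches. An equivalent sketch of the mapping logic in Python:

```python
# Same cascading-case logic as the SPL eval above, sketched as a lookup.
def state_color(state_after):
    return {
        "DOWN": "#FF0000",     # red
        "ACTIVE": "#00FF00",   # green
        "STANDBY": "#FFBF00",  # amber
    }.get(state_after)         # None when no case matches, like SPL case()
```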
Another app on Splunkbase for this: https://splunkbase.splunk.com/app/7339
There are an awful lot of different UAs, and they can introduce themselves in many different ways. It's not standardized in any way, so browser detection is more an art than a strict science. And that's even before we take into account that people can spoof their UA strings or set them to any arbitrary value. There are sites gathering known UA strings, though, like https://explore.whatismybrowser.com/useragents/parse/?analyse-my-user-agent=yes#parse-useragent BTW, your search is very inefficient.
Do something like this to find out which events aren't being counted, and adjust your matches accordingly

| eval browser=case(
    searchmatch("*OPR*"),"Opera",
    searchmatch("*Edg*"),"Edge",
    searchmatch("*Chrome*Mobile*Safari*"),"Chrome",
    searchmatch("*firefox*"),"Firefox",
    searchmatch("*CriOS*safari"),"Safari")
| where isnull(browser)
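The reason the order of the case branches matters is that Opera and Edge UA strings also contain "Chrome", and Chrome UAs contain "Safari", so the more specific patterns must be tested first. A Python sketch of the same first-match-wins idea (the patterns are simplified from the SPL and are illustrative, not authoritative UA parsing):

```python
import re

# More specific browsers first: Opera/Edge UAs also mention Chrome,
# Chrome UAs also mention Safari.
RULES = [
    ("Opera",   re.compile(r"OPR")),
    ("Edge",    re.compile(r"Edg")),
    ("Chrome",  re.compile(r"Chrome.*Safari")),
    ("Firefox", re.compile(r"Firefox", re.I)),
    ("Safari",  re.compile(r"Safari")),
]

def detect(ua):
    for name, pattern in RULES:
        if pattern.search(ua):
            return name
    return None  # an uncounted event - worth inspecting, as suggested above

ua = "Mozilla/5.0 ... Chrome/120.0 Mobile Safari/537.36"
```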
You could try something like this

| stats count(eval(match(_raw, "Invalid requestTimestamp"))) as IrT
        count(eval(match(_raw, "error events found for key"))) as eeffk
        count(eval(match(_raw, "Exception while calling some API ...java.util.concurrent.TimeoutException"))) as toe
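The same per-pattern counting can be sketched in Python against the sample events from the question. Note the optional "s" in the second pattern, which is my addition so the singular "error event found" line counts too (a literal match on "error events found for key" would miss it):

```python
import re

# Sample events from the question.
logs = [
    "error events found for key a1",
    "Invalid requestTimestamp abc",
    "error event found for key a2",
    "Invalid requestTimestamp def",
    "correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException",
    "correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException",
]

patterns = {
    "IrT":   r"Invalid requestTimestamp",
    "eeffk": r"error events? found for key",   # optional "s" catches both forms
    "toe":   r"java\.util\.concurrent\.TimeoutException",
}

# One count per pattern, like the count(eval(match(...))) aggregations.
counts = {name: sum(1 for line in logs if re.search(pat, line))
          for name, pat in patterns.items()}
# counts -> {'IrT': 2, 'eeffk': 2, 'toe': 2}
```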
Hi You could try to play with the punct field. I'm quite sure it's not exactly what you are looking for, but maybe it helps you find those similarities, and you can go forward with them in some other way. See: punct r. Ismo
When rex'ing backslashes, you need to quadruple them | rex "eligible\\\\\":(?<eligibility_status>[^,]+)"
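The quadrupling comes from two escaping layers: the regex engine needs \\ to match one literal backslash, and each of those backslashes must itself be escaped inside the quoted string, giving four. A Python sketch of the same two layers (the sample event text is invented):

```python
import re

# Invented event text containing escaped quotes, i.e. literal backslashes:
# the raw characters are {\"eligible\":true,\"id\":7}
raw = r'{\"eligible\":true,\"id\":7}'

# Four backslashes in the quoted pattern -> two in the regex -> one literal
# backslash matched in the event, exactly as in the SPL rex above.
m = re.search('eligible\\\\":(?P<eligibility_status>[^,]+)', raw)
status = m.group("eligibility_status")
```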
Hi As already said, there is a lot of stuff to tweak before you should do it in production, but that depends on your use case. With a PoC environment you can start with e.g. this https://lantern.splunk.com/Splunk_Platform/Getting_Started/Getting_started_with_Splunk_Enterprise?mt-learningpath=enterprisestart But for real production I propose that you hire a Splunk Partner or another person who already knows what needs to be done and how. t. Ismo
What exactly do you mean by "when I do it in Splunk"?
As usual, it depends. Right after installation Splunk can be used, and often is - for example in PoC/PoV scenarios where you just want to show the prospective customer what it can do on a quick-and-dirty setup. But such a setup will probably quickly hit problems due to the lack of pre-configuration. And it's not only about configuration as the technical process of setting things via GUI/conf files/CLI/REST API, but also about planning your environment.
Not out of the box. Maybe you could do something like that with MLTK but I've never tried it.
Is there any way to search for similar strings dynamically in different logs? I want to group unique error strings coming from different logs. The events are from different applications with different logging formats. I am creating a report that shows the count of events for each unique error string.

Sample events:

error events found for key a1
Invalid requestTimestamp abc
error event found for key a2
Invalid requestTimestamp def
correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException
correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException

Required results: I am looking for the following stats from the above error log statements

1) Invalid requestTimestamp - 2
2) error events found for key - 2
3) Exception while calling some API ...java.util.concurrent.TimeoutException - 2
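One generic approach to this kind of grouping is to normalize away the variable parts (IDs, key names) and count what remains. Here is a hedged Python sketch with hand-written normalization rules for the sample events above; real logs would need their own rules, and Splunk's cluster command automates a similar idea:

```python
import re
from collections import Counter

events = [
    "error events found for key a1",
    "Invalid requestTimestamp abc",
    "error event found for key a2",
    "Invalid requestTimestamp def",
    "correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException",
    "correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException",
]

def normalize(line):
    """Strip the variable parts so similar errors collapse to one string."""
    line = re.sub(r"correlationID - \d+\s*", "", line)         # drop correlation IDs
    line = re.sub(r"\berror events?\b", "error events", line)  # fold singular/plural
    line = re.sub(r"key \w+$", "key", line)                    # drop key names
    line = re.sub(r"requestTimestamp \w+$", "requestTimestamp", line)
    return line

counts = Counter(normalize(e) for e in events)
```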
I just installed Splunk Enterprise on Windows Server 2022 and am able to access the web GUI. At this point, do I need to make any changes to server.conf or inputs.conf? Also, below are the steps I am planning before I install the UF on clients:

1. Configure LDAP and other parameters
2. Create users (admin and other users)
3. Identify the data ingestion disk partition
4. Enable data receiving
5. Create indexes

Am I missing anything before I install the UF and start sending data to the indexer? I have checked the documentation site but haven't found anything specific about the initial configuration; maybe I am not looking in the right place. Thanks for your help in advance.