All Posts


I am referring to the Deployment Server list. When I go to [Settings > Forwarder Management > Clients] and click DELETE RECORD on a client, it says this option has been deprecated. I can still click delete, but the client never goes away. See my attached screenshot.
I would like to update my universal forwarders to send data to 2 separate endpoints for 2 separate Splunk environments. How can I do this using my Deployment Server? I already have an app that I will use for the UF update.
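A minimal outputs.conf sketch for that deployment app, assuming two hypothetical receiving hosts on the default port 9997 (replace with your own indexers; listing both groups in defaultGroup clones the data to both environments):
[tcpout]
defaultGroup = env1_indexers, env2_indexers

[tcpout:env1_indexers]
server = idx1.env1.example.com:9997

[tcpout:env2_indexers]
server = idx1.env2.example.com:9997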
Hi @yuanliu  Thank you for your suggestion. The subsearch has a max 50k limit, not 5k. If one or more subsearches hit the 50k limitation, I'd want to get an email notification indicating which subsearch exceeded the 50k limit. In the example below, an email alert would be sent indicating that 2 subsearches exceed the 50k limit: search3 = 60k rows and search4 = 70k rows. I can create a scheduled report that sends an email every day, but I am not sure whether the report can send emails only when a certain condition is met.
search1
| join max=0 type=left ip [search ip="10.1.0.0/16" | eval this = "search 2"]
| join max=0 type=left ip [search ip="10.2.0.0/16" | eval this = "search 3"]
| join max=0 type=left ip [search ip="10.3.0.0/16" | eval this = "search 4"]
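One hedged option, assuming each subsearch is promoted to its own saved alert rather than a plain report: have the search return a row only when the 50k limit is exceeded, and set the alert's trigger condition to "Number of Results greater than 0" so the email goes out only in that case. The IP range, time range and label below are placeholders taken from the example above:
ip="10.2.0.0/16" earliest=-24h latest=now
| stats count AS row_count
| eval which="search 3"
| where row_count > 50000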
Based on what I can understand, you can try using something like this and tweak it as needed.
| makeresults
| eval datetime_str="Thu 10 Oct 2024 08:48:12:574 EDT"
| eval datetime=strptime(datetime_str, "%a %d %b %Y %H:%M:%S:%3N %Z")
| eval day_name=strftime(datetime, "%A"), day_of_month=strftime(datetime, "%d"), month=strftime(datetime, "%b"), year=strftime(datetime, "%Y"), week_number=strftime(datetime, "%U"), time_part=strftime(datetime, "%H:%M:%S")
| fields datetime_str, datetime, day_name, day_of_month, month, year, week_number, time_part
| eval hour=substr(time_part, 1, 2), minute=substr(time_part, 4, 2), second=substr(time_part, 7, 2)
The reason for your error is "Poorly formatted data". Regarding INDEXED_EXTRACTIONS=JSON, there is a good article on when/where it can be used. Can you please run this search and show me the output for your sourcetype?
index=_internal source=*splunkd.log* AggregatorMiningProcessor OR LineBreakingProcessor OR DateParserVerbose WARN data_sourcetype="my_json"
| rex "(?<type>(Failed to parse timestamp|suspiciously far away|outside of the acceptable time window|too far away from the previous|Accepted time format has changed|Breaking event because limit of \d+|Truncating line because limit of \d+))"
| eval type=if(isnull(type),"unknown",type)
| rex "source::(?<eventsource>[^\|]*)\|host::(?<eventhost>[^\|]*)\|(?<eventsourcetype>[^\|]*)\|(?<eventport>[^\s]*)"
| eval eventsourcetype=if(isnull(eventsourcetype),data_sourcetype,eventsourcetype)
| stats count dc(eventhost) values(eventsource) dc(eventsource) values(type) values(index) by component eventsourcetype
| sort -count
Hi, before asking I did try to find a thread that covers this kind of datetime value, but I was not able to locate one, so I had to start this new thread. I have datetime values in string format like:
Thu 10 Oct 2024 08:48:12:574 EDT
Sometimes the value may be null - that's how it is. What I need to do is get/derive these into separate columns:
day name, like Thursday
day of month, like 10
month, like Oct
year, like 2024
week number, like 2 or 3
the time part as a separate column, like 08:48:12:57 (not worried about EDT)
the time components split out again: 08 as hour, 48 as minute, 12 as second (not worried about ms)
Still looking for threads covering this kind of thing, but... again, sorry, this is a basic one that just needs more searching.
Yup sorry, I should have delineated what I have done.
Log examples:
Time: 10/10/24 6:30:11.478 AM
Start Event: 2024-10-10T06:30:11.478-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
Time: 10/10/24 6:30:11.509 AM
End Event: 2024-10-10T06:30:11.509-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
index=* ("Start View Refresh (price_vw)" OR "End View Refresh (price_vw)")
| transaction startswith="Start View Refresh (price_vw)" endswith="End View Refresh (price_vw)"
| table duration
Now when I just look for the log events, I get 4 sets of Start and End events. But when I run the above for the same time range, I was expecting 4 durations; instead I get just 2.
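If it helps while debugging, here is a hedged alternative to transaction that pairs the events with streamstats, assuming the Start and End events for this view strictly alternate in time (a sketch only, not an explanation of why transaction merges your pairs):
index=* ("Start View Refresh (price_vw)" OR "End View Refresh (price_vw)")
| eval phase=if(searchmatch("Start View Refresh"), "start", "end")
| sort 0 _time
| streamstats count(eval(phase="start")) AS refresh_id
| stats min(_time) AS start_time max(_time) AS end_time BY refresh_id
| eval duration=end_time-start_time
| table refresh_id duration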
Could you please check if the default.xml exists under %SplunkHome%/etc/apps/search/default/data/ui/nav/?
Thank you. Appreciate your assistance and input on helping me learn the finer details of Splunk and how the logic works. And yes, the lookup is .csv and not .cvs; that was a typo. I have a sandbox I work with for Splunk, so I manually type my searches on my work computer in the Splunk forum to help me learn the syntax better. Old school way of learning something, especially when it comes to code. Thanks again.
On a side note: long after migrating to WiredTiger we stumbled over some version trouble after upgrading Splunk from 9.1.4 to 9.1.5. It turned out that a simple "touch splunk/var/run/splunk/kvstore_upgrade/versionFile42" was able to resolve the problem.
Hi @jroedel , eventtype and tag aren't related to the fields: first you have to create an eventtype for the login, called e.g. "my_technology_login":
index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on")
and tag it with "Authentication" (required by CIM) and "LOGIN". Then create one for the logout:
index=my_index sourcetype=my_sourcetype (logoff OR "has logged off the server")
and tag it with "Authentication" (required by CIM) and "LOGOUT". The last sample doesn't seem to be a logfail event; please check it and handle it like the others. Then you have to extract the user and src fields using regexes. Ciao. Giuseppe
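Roughly, that maps to configuration like the following sketch, where the stanza names, index and sourcetype are placeholders and the tag names simply follow the suggestion above (eventtypes.conf defines the eventtypes, tags.conf attaches the tags):
# eventtypes.conf
[my_technology_login]
search = index=my_index sourcetype=my_sourcetype ("has logged onto the server" OR "logged on")

[my_technology_logout]
search = index=my_index sourcetype=my_sourcetype (logoff OR "has logged off the server")

# tags.conf
[eventtype=my_technology_login]
authentication = enabled
login = enabled

[eventtype=my_technology_logout]
authentication = enabled
logout = enabled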
Let's focus for now on a *successful* login. As shown in my initial post, there are multiple events for the same successful login. One carries the username, the other carries the source IP. On which one should I set the event type and tag? And how do I enrich that event with the field from the other one?
Hi @jroedel , ok, you have to create eventtypes and add the tag "authentication" to the login, logout and logfail eventtypes. You could try the Add-On Builder app (https://splunkbase.splunk.com/app/2962) or the CIM Vladiator app (https://splunkbase.splunk.com/app/2968), which help you with field aliases, calculated fields and tagging. I usually use the second one. Ciao. Giuseppe
Maybe I just do not see it: how would I apply an event type to a successful login event that is scattered over multiple log entries? My requirement is to achieve CIM compliance with this data source.
Hi @waJesu , if host is the host sending the logs and URL is a field in your logs, you could run something like this:
index=your_index sourcetype=your_sourcetype earliest=-24h latest=now host=your_host
| stats count BY URL
Obviously this search depends on the extracted fields. Ciao. Giuseppe
Hi @whitecat001 , sorry, but there's something strange in your request: are you asking how to ingest JSON files on a forwarder, or how to parse JSON files in a Search Head Cluster? You spoke of the Deployer, which is used in a Search Head Cluster: if your requirement is to parse a JSON file, you have to add INDEXED_EXTRACTIONS=JSON to the sourcetype using the web GUI, and this configuration is replicated to the other Search Heads. If instead you are asking how to ingest and parse a JSON file, you have to add INDEXED_EXTRACTIONS = JSON to the sourcetype in props.conf on the forwarders, using the Deployment Server or manually. What's your requirement? Ciao. Giuseppe
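For the ingest-and-parse case, a minimal props.conf sketch (the sourcetype name is a placeholder) that would go into the app pushed to the forwarders, or be set via the web GUI for the search-head case:
# props.conf
[my_json_sourcetype]
INDEXED_EXTRACTIONS = JSON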
I am having trouble getting a .json file into Splunk through the backend to help support a customized dashboard. Is there a particular step I need to follow to get it in through the Deployer?
I need a query that lists the URLs a particular host has reached out to in a particular time window, e.g. in the last 24 hours. Please help.
Hi @jroedel , you can create an eventtype for login and one for logout, tagging these eventtypes with a related tag, so you can then use them in your searches. But what's your requirement? What do you need to get as a result? Ciao. Giuseppe
I have onboarded data from a system that scatters actual events over many logging events. Especially successful or failed logins cause me some headache.
Successful login:
<timestamp> Connection 'id123' from '192.168.1.100' has logged onto the server
<timestamp> User 'johndoe' logged on (Connection id='id123')
[ Time passes until John eventually decides to log off again ]
<timestamp> Connection 'id123' from has logged off the server
Failed login:
<timestamp> Connection 'id123' from '192.168.1.100' has logged onto the server
<timestamp> Connection 'id123' from has logged off the server
Of course, I can fiddle around with transaction or even stats to list successful and failed logins or create an alert for it. However, that is absolutely not elegant. What is the best practice to get this data nicely streamlined with eventtypes and tags?