All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, in the Module 4 lab of the Fundamentals course using Splunk Cloud, there is no icon for "Add Data." Yes, I'm logged in as the admin. Again, I am using Splunk Cloud to do the Fundamentals training. Please advise, Alan
I am a veteran trying to take advantage of the free training for Splunk Fundamentals II. I created a personal account with my personal email address. I sign in to the site with no problem, but when I try to verify that I am a vet, it says I completed the verification process, yet at the bottom of that verification box there's a message. ...then I get an email to the same personal account telling me someone is trying to use my account... and it's me. I really don't know how to fix this issue and I'm looking for assistance. The email is from workforcetraining@splunk.com. Thanks in advance!
I have a very simple search:

index=logs_glbl sourcetype=kube:container:app-name namespace=prod status=500 | stats count

Result: 1. The result comes from the sample log below:

::ffff:10.244.3.38 - - [06/Aug/2020:20:14:03 +0000] "GET /api/v1/workspace/getEngagement2?id=123 HTTP/1.1" 500 39 "https://atlas.intenal.noman.com" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36"

I have defined a field named status for the above, which uses this inline field extraction:

^[^"\n]*"(?P<method>\w+)[^"\n]*"\s+(?P<status>\d+)

Now when I perform a new search:

index=logs_glbl sourcetype=kube:container:app-name namespace=prod | stats count by status

I don't get the status 500 error. My results exclude the 500 status, and are probably missing other HTTP statuses too:

status | count
200 | 515
302 | 152
304 | 8
401 | 71
409 | 7
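One quick way to narrow this down is to run the same inline extraction outside Splunk against the raw event. This is a minimal Python sketch using the exact regex and the sample log line from the question (with the user-agent string shortened); if the pattern captures 500 here, the gap is more likely in the other events' format than in the pattern itself.

```python
import re

# The inline field extraction from the question, unchanged.
PATTERN = re.compile(r'^[^"\n]*"(?P<method>\w+)[^"\n]*"\s+(?P<status>\d+)')

# The raw event from the question (user-agent string shortened).
event = ('::ffff:10.244.3.38 - - [06/Aug/2020:20:14:03 +0000] '
         '"GET /api/v1/workspace/getEngagement2?id=123 HTTP/1.1" 500 39 '
         '"https://atlas.intenal.noman.com" "Mozilla/5.0"')

m = PATTERN.search(event)
print(m.group("method"), m.group("status"))
```

If this prints the expected method and status, it is worth pulling a raw event that carries one of the missing statuses and rerunning the same check against it.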
I have set up an alert for a stats expression like this:

| stats element_name count

This is triggered each time the alert is scheduled, to give a summary of certain events. However, if Trigger is set to Once, I only get the first row. If I set it to For each result, then I get one POST per row. This gets the data over, but the receiver then needs to put these separate things back together (e.g. by matching SID or some such); I'd rather they were all sent in the same package to begin with. Is there some expression (say, using eval) that I can add that would convert the table into a single item returned when Trigger is set to Once, e.g.:

"result: { "table": "{'thing_1': 387, 'thing_2': 88}" }
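The reshaping being asked for (collapse all result rows into one JSON object so a single POST carries everything) is just a dictionary build. A Python sketch of that transformation, with invented row values shaped like the output of a stats-by-element_name search:

```python
import json

# Hypothetical alert rows, shaped like "| stats count by element_name" output.
rows = [
    {"element_name": "thing_1", "count": 387},
    {"element_name": "thing_2", "count": 88},
]

# Collapse the table into one object so a single POST carries everything.
table = {r["element_name"]: r["count"] for r in rows}
payload = json.dumps({"result": {"table": table}})
print(payload)
```

On the Splunk side, the analogous move is usually to aggregate the rows into a single result row (for example with stats values() or a JSON-building eval) before the trigger fires, so that Trigger: Once sees exactly one result.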
Here is a link to the dataset and the regex. It is working on regexr but not in transforms.conf. I have tested by using . as my regex, and it then sends all logs to the nullQueue, so I know the stanzas are correct; it's a problem with the regex, and I have not been able to figure it out. https://regexr.com/59qu2

Here are my stanzas from props.conf and transforms.conf:

props.conf
[cs_replicator]
TRANSFORMS-CS = EliminateCS2

transforms.conf
[EliminateCS2]
REGEX = (?:{"ScreenshotsTakenCount".*|{"ProcessCreateFlags").*
DEST_Key = queue
FORMAT = nullQueue

Any help is appreciated.
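Independent of the props/transforms wiring, the REGEX line can be checked in isolation. A small Python sketch (braces escaped for Python's re module; the sample events are invented, not taken from the linked dataset):

```python
import re

# The REGEX line from transforms.conf, with the braces escaped for Python's re.
pattern = re.compile(r'(?:\{"ScreenshotsTakenCount".*|\{"ProcessCreateFlags").*')

# Invented sample events: two that should be dropped, one that should be kept.
drop_1 = '{"ScreenshotsTakenCount": 3, "SessionId": "abc"}'
drop_2 = '{"ProcessCreateFlags": 16, "SessionId": "def"}'
keep   = '{"SomeOtherKey": 1}'

for evt in (drop_1, drop_2, keep):
    # Splunk's transforms REGEX is an unanchored search, so use search() here too.
    print(bool(pattern.search(evt)), evt)
```

If the pattern behaves correctly here but not on the indexer, the next thing to compare is the exact raw event text Splunk sees versus what was pasted into regexr.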
I would like to confirm what time the throttling window duration uses: is it based on the trigger time or on the event time? Also, what is the difference between throttling for 1 day versus 24 hours? We have a correlation search with real-time scheduling. Trigger alert when number of results is greater than 0; the throttling window duration is set to 1 day. The search is scheduled to run at 15 * * * * for the time range -5m@m to -65m@m. The first notable was triggered on Aug 4 at 16:17, which reported that the event occurred at 15:36 on Aug 4. The second event occurred on Aug 5 at 16:08, which didn't trigger the notable. When I reran the SPL manually for the time range 15:10 - 16:10, the search returned a result. What would be the reason for not seeing the alert? Throttling? Or the event not having been received in Splunk when the search was running? Thanks!
We are migrating our load balancer and need to move all the VIPs to a different load balancer with new VIPs. We have the Controller installed in HA, and we need to understand where we can change the VIP or IP in the AppDynamics application, in terms of local files or EC. Is there any way we can update the agent configuration, in terms of the Controller VIP, from the Controller itself?
I am looking for a single pane of glass, or dashboard, to show all the alerts from all the applications we are monitoring. Please help with the steps for configuring such a dashboard.
1. AppD version - 4.5.18
2. Single-tenant environment
This is probably a really simple question, but I have events coming in every minute. I've used | rex field=_raw .... to extract a field from those events. How do I plot these field values over time? This is what my statistics look like. I've tried | timechart span=1m values(Last_Heartbeats) but nothing shows up on my line graph. I've also tried it without a span, I've tried | stats with a | bucket command, and I've tried dc(Last_Heartbeats), but I can't figure it out.
Hi All, I have inherited a HF running on a Linux server, collecting data from several cloud sources using the inputs from the TAs below, that needs to be moved to a newly built Linux server (no Splunk version upgrades).

azure_event_hub
azure_security_center_input
digital_shadows_searchlight
microsoft_graph_security
MS_AAD_audit
MS_AAD_signins
mscs_azure_audit
mscs_azure_resource
splunk_ta_o365_management_activity
windows_defender_atp_alerts

Can you please recommend any procedures and best practices to make sure there is no data duplication? I'm thinking of the two approaches below; will either of these work, and which is better?

1. a. Stop Splunk on the old host and copy the Splunk directory to the new host.
   b. Change the Splunk server/instance name to match the new host.
   c. Start Splunk on the new host.
2. Install fresh Splunk on the new host and configure the TAs. Is there a way to move any checkpoints (or something similar to fishbuckets?) from the old HF, so that the TAs pull data from where they stopped on the existing HF?

Thanks a lot in advance, Chaith
Hello All, recently I observed error messages on my search head like: "Unable to distribute to peer named XXX at URI https://xx:8089 because replication was unsuccessful. replicationStatus Failed failure info: Dispatch Command: Search Bundle throttling is occuring because the limit for number of bundles with pending lookups for indexing has been exceeded. This could be the result of large lookup files updating faster than Splunk software can index them. Throttling ends when this instance has caught up with indexing of lookups"

On investigating, I found that a lot of the lookups in the bundle were over 100 MB, going up to 500+ MB. I proceeded to identify the large lookups and created a replicationBlacklist for them, which I plan to implement on my search head in distsearch.conf:

[replicationBlacklist]
blklistfiles = /apps/*/lookups/(abc.csv|def.csv|fgh.csv)

My question is: is it OK to delete all the .bundle files from the $SPLUNK_HOME/var/run directory after implementing the above change, and then restart splunkd? Some bundles are almost a year old. What would the impact of this be? Is there anything I should take care of before doing this, or is there an alternative? Any opinion/advice will be highly welcomed. Thank you, S
Hi all, after upgrading to 8.0.5 from 7.2.6, all my users can't send mail using sendemail.py because they don't have access to mail settings: ERROR sendemail:1370 - Could not get email credentials from splunk, using no credentials. Error: [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/services/admin/alert_actions/email I've already checked that list_settings is added to the roles. If I add admin_all_objects, users can send, but I don't want to add that capability to all users. Is there some capability other than list_settings that would enable users to send mail? Thanks
I have got a query like this:

index=* request in (request1, request2, request3)
eval request&& = request1 + request2

Please help.
Hi, I am stuck on a query problem. What I need to do is join some events and get the result, and for that I am using stats. I can't use join because of the sub-search limitation. Below is my query. The common field in the events is id, which I am extracting, but what I want is to produce a table keyed by clientId (like below), and the current query does not give what I require.

clientId | SuccessCount | FailedAttemptCount
client1 | 100 | 50
client2 | 250 | 70
client3 | 5500 | 450

The problem is that one event contains the clientId (sourcetype=splunk_audit_log event=AUTHN_ATTEMPT clientId=* status=inprogress) and another event contains whether that client was successful during the login attempt (source="server.log" "In processCredentials"). The query I created is:

index=test (sourcetype=splunk_log event=AUTHN_ATTEMPT clientId=* status=inprogress) OR (source="server.log" "In processPasswordCredential" NOT "not found!")
| rex field=_raw "user\=\[\d+\] (?<raw_status>.*)"
| rex field=_raw "sessionid\=\"id\:(?<id>[^\"]+)"
| eval status_new=case(raw_status="found and success","Success", raw_status="found but failed","Fail")
| stats list(status_new) as aa_status by id
| eval NumberOfSuccess=mvfilter(match(aa_status, "Success"))
| eval NumberOfFail=mvfilter(match(aa_status, "Fail"))
| eval SuccessCount = mvcount(NumberOfSuccess)
| eval FailedAttemptCount = mvcount(NumberOfFail)
| fields - NumberOfSuccess NumberOfFail count aa_status

Let me know if someone can help, please. I appreciate your support.
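The join-via-stats pattern being attempted can be prototyped outside SPL to check the grouping logic. A plain-Python sketch (the event values are invented) mirroring "merge both event types by id, then roll up by clientId":

```python
from collections import defaultdict

# Invented sample events: AUTHN_ATTEMPT rows carry the clientId,
# server.log rows carry the outcome; "id" is the shared key.
events = [
    {"id": "s1", "clientId": "client1"},
    {"id": "s1", "raw_status": "found and success"},
    {"id": "s2", "clientId": "client1"},
    {"id": "s2", "raw_status": "found but failed"},
    {"id": "s3", "clientId": "client2"},
    {"id": "s3", "raw_status": "found and success"},
]

# Pass 1: merge everything that shares an id (what "stats ... by id" does).
by_id = defaultdict(dict)
for e in events:
    by_id[e["id"]].update(e)

# Pass 2: roll the merged rows up by clientId (a second "stats ... by clientId").
counts = defaultdict(lambda: {"SuccessCount": 0, "FailedAttemptCount": 0})
for row in by_id.values():
    key = "SuccessCount" if row.get("raw_status") == "found and success" else "FailedAttemptCount"
    counts[row["clientId"]][key] += 1

print(dict(counts))
```

In SPL, one common shape for the same two passes is a first stats that carries clientId along by id (e.g. stats values(clientId) as clientId, values(status_new) as status_new by id) followed by a second stats with count(eval(...)) by clientId, though the exact form depends on the data.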
Hi, I'm having a hard time setting up the JRE installation path (JAVA_HOME). Please note that, to the best of my knowledge, my Java path is "C:\ProgramData\Oracle\Java\javapath"; this is the result of running the command "where java" in cmd (Windows 10).

Splunk version: 8.0.5 (local computer, pending trial for production)
DB Connect app version: 3.3.1
JDK installed: 11.0.8

The error depends on how the path is written. Entering exactly C:\ProgramData\Oracle\Java\javapath yields the error 'str' object has no attribute 'decode'. Entering "C:\ProgramData\Oracle\Java\javapath" with quotation marks yields the error JAVA_HOME path not exist. Am I heading the right way? That folder does contain java.exe.
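One thing worth checking: javapath is typically a folder of launcher shims, whereas JAVA_HOME is conventionally expected to be a JDK/JRE root that has a bin subfolder under it. A small hypothetical helper sketches that check (the function name and heuristic are this example's own, not part of DB Connect):

```python
import os
import tempfile

def looks_like_java_home(path: str) -> bool:
    """Heuristic: a usable JAVA_HOME has bin/java(.exe) directly under it."""
    return any(
        os.path.isfile(os.path.join(path, "bin", exe))
        for exe in ("java.exe", "java")
    )

# Demo on a throwaway directory shaped like a JDK root.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "bin"))
open(os.path.join(root, "bin", "java.exe"), "w").close()
print(looks_like_java_home(root))
```

A folder that contains java.exe directly, with no bin subfolder, would fail this kind of check, which may be why the app rejects the javapath location.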
I have syslogs from our load balancer, which has 4 servers on it. When one of the servers' state changes from UP to DOWN or DOWN to UP, it is reported in the syslogs as a string value in an event, but sometimes a single event from the same time will contain server state changes for multiple servers, OR a single server with BOTH a state change to DOWN and a state change to UP. My issue is that no matter what search I use, it never accurately picks up every state change for every server from an event that has multiple messages in it. Below is a sample of one of my events that has more than one state change. NOTE: I want to extract ALL instances of the following messages to a single field:

A Loadbalancer Server Status is changed to DOWN
A Loadbalancer Server Status is changed to UP

LOG EXAMPLE:
Aug 6 03:01:12 NSX-Edge03-0 MsgMgr[2349]: [MDCM]: payload len:770 data:{"systemEvents":[{ "moduleName":"vShield Edge LoadBalancer", "severity":"Informational", "eventCode":"30302", "message":"A Loadbalancer Server Status is changed to DOWN", "timestamp":1596708060, "metaData":{ "listener" : "Carson_MDCM_Servers", "server" : "Server69" } },{ "moduleName":"vShield Edge LoadBalancer", "severity":"Informational", "eventCode":"30301", "message":"A Loadbalancer Server Status is changed to UP", "timestamp":1596708081, "metaData":{ "listener" : "Carson_MDCM_Servers", "server" : "Server69" } },{ "moduleName":"vShield Edge LoadBalancer", "severity":"Informational", "eventCode":"30301", "message":"A Loadbalancer Server Status is changed to UP", "timestamp":1596708082, "metaData":{ "listener" : "WT_MDCM_Servers", "server" : "Server81" } }]}
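The key is to capture every occurrence in the event, not just the first. This Python sketch verifies such a pattern against an abbreviated version of the sample payload; in SPL, rex with max_match=0 is the analogue, producing a multivalue field with one entry per match.

```python
import re

# Abbreviated version of the sample event from the question.
event = (
    '{"systemEvents":[{"message":"A Loadbalancer Server Status is changed to DOWN",'
    '"metaData":{"server" : "Server69"}},'
    '{"message":"A Loadbalancer Server Status is changed to UP",'
    '"metaData":{"server" : "Server69"}},'
    '{"message":"A Loadbalancer Server Status is changed to UP",'
    '"metaData":{"server" : "Server81"}}]}'
)

# findall returns every occurrence of the captured message, not just the first.
messages = re.findall(
    r'"message":"(A Loadbalancer Server Status is changed to (?:DOWN|UP))"', event
)
print(messages)
```

A rex along the lines of | rex max_match=0 "\"message\":\"(?<state_msg>A Loadbalancer Server Status is changed to (?:DOWN|UP))\"" should then yield state_msg as a multivalue field with all three changes from this event.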
Hi there, I have just started using Splunk and it is quite alien to me. Hope you guys can help me out! I have the following search set up:

User_ID=B123456
| streamstats current=f window=1 last(Agent) as Prev_Agent
| eval Agent_Change=if(Agent==Prev_Agent, "True", "False")
| table Agent, Agent_Change

Basically, it evaluates whether the value of the field Agent is equal to the previous value for each event of a specific user (User_ID=B123456). Currently, it looks like this:

Agent | Agent_Change
rgrg1 | True
rgrg1 | True
rgrg1 | False
ytyt4 | False
rgrg1 | True
rgrg1 | True
rgrg1 | True

I would like to count the total number of True and False values for multiple users (User_ID) and display it in one table:

User_ID | True | False
B123456 | 55 | 76
B654321 | 22 | 82
B567890 | 87 | 99
B098765 | 12 | 33

Hope someone can help me out or at least point me in the right direction. Much appreciated! Matthew
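The desired tally is a per-user pivot of the Agent_Change values. A small Python sketch of that counting step, with invented (User_ID, Agent_Change) pairs:

```python
from collections import Counter, defaultdict

# Invented (User_ID, Agent_Change) pairs standing in for the search results.
rows = [
    ("B123456", "True"), ("B123456", "True"), ("B123456", "False"),
    ("B654321", "False"), ("B654321", "True"),
]

# One Counter per user: tally of True vs False.
table = defaultdict(Counter)
for user, change in rows:
    table[user][change] += 1

for user, counts in sorted(table.items()):
    print(user, counts["True"], counts["False"])
```

In SPL, one common way to get this shape is to drop the single-user filter, add "by User_ID" to the streamstats so the previous-value comparison stays within each user, and finish with chart count over User_ID by Agent_Change.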
I am having a problem with what I believe is writing a regex to clean up some events before I report on them in a dashboard. I am pulling specific security events from Windows, and each event should return a username and a domain. I am getting those results, but with each, it is also returning a second data item, "-". That is throwing things off and making it look ugly, and I haven't had much luck ripping it out. Hoping someone can assist and possibly explain what the solution is doing. I tried to do an eval replace for the field where "-" is replaced with "", but then none of my events showed up, so clearly that was wrong. A sample event looks like this to help clarify what I am getting: I basically need to drop the first line from both "Account" and "Account_Domain" so that I would only get service. and PF as values. As always, help is greatly appreciated.
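When a multivalue field carries a placeholder value like "-", the usual fix is to filter out the whole value rather than rewrite its text. The filtering step itself, sketched in Python with made-up field contents:

```python
# Made-up multivalue field contents, as described in the question.
account = ["-", "service."]
account_domain = ["-", "PF"]

def drop_dashes(values):
    """Keep every value except the placeholder '-'."""
    return [v for v in values if v != "-"]

print(drop_dashes(account), drop_dashes(account_domain))
```

The SPL counterpart would be along the lines of | eval Account=mvfilter(Account!="-") (and likewise for Account_Domain): mvfilter removes an entire value from the multivalue field, whereas replace() only rewrites text inside each value, leaving an empty string behind.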
Hi, I'm sending logs from Windows machines to a log group in CloudWatch, which sends to Splunk via a Lambda function. These logs arrive in Splunk with the wineventlog sourcetype, but the parsing is not correct. In the raw source logs, I can see that the logs come in on one line, differently from how the parser understands them. Example:

[Security] [4776] [Microsoft-Windows-Security-Auditing] [XXXXX] [The computer attempted to validate the credentials for an account. Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 Logon Account: 000000 Source Workstation: CBBBB Error Code: 0x0]

I've tried changing the sourcetype, changing the format to CSV, and deleting the line_breaker, but so far it hasn't worked. Does anyone know how I can parse these kinds of logs coming from log groups in AWS CloudWatch? Thank you a lot.
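If the events really arrive as one bracketed line, the fields can be recovered with a simple regex; here is a Python sketch on the sample line (the field names are guesses, since the bracket positions are not labeled in the raw data):

```python
import re

# The sample line from the question.
line = ('[Security] [4776] [Microsoft-Windows-Security-Auditing] [XXXXX] '
        '[The computer attempted to validate the credentials for an account. '
        'Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 '
        'Logon Account: 000000 Source Workstation: CBBBB Error Code: 0x0]')

# Each top-level field sits in its own [...] block.
fields = re.findall(r'\[([^\]]*)\]', line)
channel, event_id, provider, host, message = fields  # names are assumptions
print(channel, event_id, provider, host)
```

The same capture could become a Splunk-side extraction (an EXTRACT or rex with named groups) on a custom sourcetype whose line breaking treats each bracketed record as one event, rather than relying on the stock wineventlog parsing.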
KVstore is failing on a heavy forwarder. I have generated a new certificate. I also tried renaming the file server.pem and restarting the services. It is still failing. Below is the error:

socket: Bad file descriptor
connect:errno=9

I ran the following command:

openssl s_client -connect localhost:8191

Please help me with the same. Regards, Prathamesh