All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


When I click the Sync with ThousandEyes button in User Experience, I get the error message "Sync with ThousandEyes failed. Please try again." By inspecting the page, I found a 500 HTTP error on "https://cisco-thousandeyes.saas.appdynamics.com/controller/restui/network/fullsync" [HTTP/1.1 500 Internal Server Error 3755ms]. I used my OAuth token from ThousandEyes in the Integration settings in AppDynamics, and I can receive alerts in AppDynamics from ThousandEyes, so this error is not related to the token; it looks more like an internal error on the AppDynamics server side.
I use Splunk UBA 5.3.0. When I try to add a data source with Splunk Direct (raw events), I get the error "There was an error processing your request. It has been logged (ID ...)". How do I fix it? I use Splunk Enterprise 9.0.0 (both Splunk Enterprise and Splunk UBA are fresh installs). Thanks for the help.
Hi, I am trying to find the list of IDs that fail from my logs. Say I have:

2023-11-14T10:30:30,118 INFO Operation failed ..... ......
2023-11-14T10:30:40,118 INFO Operation ID ABCD .............
2023-11-14T10:35:25,118 INFO Operation success ..... ......
2023-11-14T10:35:30,118 INFO Operation id 1234 ''''''

I am trying to get the information as:

Timestamp | Status | ID
2023-11-14T10:30:30 | failed | ABCD
2023-11-14T10:35:25 | success | 1234

I appreciate any help. Thanks.
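A sketch of one possible approach (the index name and the exact message formats are assumptions): since each status line precedes its ID line, filldown can carry the status forward onto the following event.

index=app_logs ("Operation failed" OR "Operation success" OR "Operation ID" OR "Operation id")
| rex "Operation (?<status>failed|success)"
| rex "(?i)Operation id\s+(?<op_id>\S+)"
| sort 0 _time
| filldown status
| where isnotnull(op_id)
| table _time status op_id

The filldown fills the null status on each "Operation ID" event with the most recent status value, so the final table pairs each ID with the status logged just before it.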
I want to add a command to my add-on, with the aim of passing the Splunk SPL query results to that command, then processing them and returning the data to Splunk as statistical information. This is my SPL command:

index="test" | stats count by asset | eval to_query=asset | fields to_query | compromise

But the processing of requests in my command is synchronous, which consumes a lot of time:

def stream(self, records):
    for record in records:
        logger.info(record)
        to_query = record.get("to_query")
        data = self.ti_compromise(to_query)
        logger.info(data)
        if data:
            res = deepcopy(record)
            if data[to_query]:
                for ioc in data[to_query]:
                    if not ioc["ioc"][2]:
                        ioc["ioc"][2] = " "
                    res.update({PREFIX + key: value for key, value in ioc.items()})
                    yield res
            else:
                res.update(EMPTY_RTN)
                yield res

The method self.ti_compromise(to_query) requests other interfaces. Can I modify the above method to do concurrent processing in Splunk? If possible, which approach would be better? Also, can Splunk's statistical output receive list types, such as:

[
  {
    "alert_name": "aaaaaaaaaaaa",
    "campaign": "",
    "confidence": "",
    "current_status": ""
  },
  {
    "alert_name": "bbbbbbbbbbbb",
    "campaign": "",
    "confidence": "",
    "current_status": ""
  }
]
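A sketch of one way to parallelize the lookups with a thread pool, assuming ti_compromise is thread-safe and I/O-bound (the worker count of 8 is an arbitrary choice to tune):

from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy

def stream(self, records):
    # Materialize the incoming chunk so all requests can be fanned out at once.
    records = list(records)
    queries = [r.get("to_query") for r in records]
    # Issue the ti_compromise calls concurrently instead of one at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(self.ti_compromise, queries))
    # Merge results back into the records exactly as the sequential version did.
    for record, data in zip(records, results):
        if not data:
            continue
        to_query = record.get("to_query")
        res = deepcopy(record)
        if data[to_query]:
            for ioc in data[to_query]:
                if not ioc["ioc"][2]:
                    ioc["ioc"][2] = " "
                res.update({PREFIX + key: value for key, value in ioc.items()})
                yield res
        else:
            res.update(EMPTY_RTN)
            yield res

On the second question: streaming commands emit flat field/value records, so a list like the one above would normally be flattened into one yielded record per list element (as the loop already does) or stored as a multivalue or JSON-string field.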
We use Splunk for data analysis and monitoring. We have the ServiceNow add-on to collect CMDB data. It goes back and collects all the data once, then only collects new info on changes. Therefore, if those events are ever rolled from cold to frozen, the data points we require will be removed, and the add-on is not set up to grab all the data again. This means we cannot lose any of that data, otherwise the results will be incomplete. I would like to make it so that the data never rolls to frozen, or get some input on how we can best make this scenario work.
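A sketch of the retention settings involved (the index name "snow_cmdb" is an assumption, and the values would need sizing for your environment):

# indexes.conf on the indexers
[snow_cmdb]
# Events are frozen (deleted by default) only after this age; ~10 years here.
frozenTimePeriodInSecs = 315360000
# Raise the size cap too, since hitting it also forces the oldest buckets to freeze.
maxTotalDataSizeMB = 512000

Alternatively, setting coldToFrozenDir would archive frozen buckets instead of deleting them, though archived buckets are no longer searchable until thawed.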
Hi All - Pretty new to Splunk and having an issue sorting/parsing data from our syslog server. We have many RHEL 7 Linux hosts all sending their logs to one server, where they get aggregated. This works fine: I can go into /var/log/secure, messages, etc. and see entries from all the hosts we have. We are running a Splunk forwarder on this host with the hope that it would forward all the data to Splunk as it hits this RHEL 7 log aggregator. We have just a single search head/indexer, and if I run the query index="*" I do get quite a few results, BUT it only shows two hosts: the Splunk instance and the RHEL 7 system we are aggregating the logs on. If I change the search to index="*" hostname, with the hostname being one of the RHEL hosts, I can find the entries specific to that host. I hope this makes sense? So somehow I need to tell Splunk about these hosts so they are recognized as separate hosts. What can I do to make this work? Thank you all in advance!
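A sketch of the usual fix: override the host field from the syslog header at index time (the sourcetype name and the exact header regex are assumptions, and this must live where the data is parsed, i.e. the indexer or a heavy forwarder, not a universal forwarder):

# props.conf
[syslog]
TRANSFORMS-set_host = syslog_host_override

# transforms.conf
[syslog_host_override]
DEST_KEY = MetaData:Host
REGEX = ^\w{3}\s+\d+\s+[\d:]+\s+(\S+)
FORMAT = host::$1

This pulls each event's originating hostname out of the standard syslog header, so events are attributed to the sending RHEL host instead of the aggregator.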
I have installed Splunk forwarder 9.1.1 on a Linux server, but the splunk user and group could not be created by the RPM installation. I thought fixing that might resolve why I kept getting an inactive forward-server, but I ended up with a new error. When I try to restart the Splunk forwarder, I get the following error: splunkd is not running, "failed splunkd.pid doesn't exist". And when I try to have the Splunk forwarder list the forward-server, I get the following error three times: 'tcp_conn_open_afux ossocket_connect failed with no such file or directory'. It still lists my server as an inactive one, despite another Splunk forwarder Linux host properly connecting to Splunk Enterprise via an SSL connection. I have also made sure that the listening port (9997) is being listened on by Splunk; it is the same port used by the other Linux host to forward logs.
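A sketch of how the missing account could be created manually and ownership repaired (paths assume a default /opt/splunkforwarder install):

# Create the splunk group and user the RPM could not create
sudo groupadd splunk
sudo useradd -r -g splunk -d /opt/splunkforwarder splunk

# Make sure the whole install tree is owned by that account, then start as it
sudo chown -R splunk:splunk /opt/splunkforwarder
sudo -u splunk /opt/splunkforwarder/bin/splunk start

The 'ossocket_connect ... no such file or directory' message is consistent with the CLI failing to reach a local splunkd socket because splunkd never started, which the missing splunkd.pid also suggests.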
Hello, I have a use case where I have a bunch of email alerts that I need to determine the system name for. For example, let's say I have the alerts:

1. File system alert on AAA
2. File system alert on server servernameaaaendservername
3. File system alert on server BBB

I have the list of these system names in a lookup table (around 100 unique names), so adding 100 lines of field_name LIKE "%systemname1%","systemname1" doesn't seem efficient. Is there a way to use a conditional statement with the lookup table to match the statements? I am trying to get the output below by using the system names found in the lookup table: if a system name in the lookup table matches what is found in the alert, output that system name.

Alert Name || System Name
File system alert on AAA || AAA
File system alert on server servernameaaaendservername || AAA
File system alert on server BBB || BBB
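A sketch of one approach using a wildcard lookup (the lookup and field names are assumptions). The lookup CSV holds a pattern column with each system name wrapped in wildcards, e.g. *AAA*,AAA:

# transforms.conf
[system_names]
filename = system_names.csv
match_type = WILDCARD(pattern)
max_matches = 1

# Search-time usage
index=alerts
| lookup system_names pattern AS alert_name OUTPUT system_name
| table alert_name system_name

With match_type = WILDCARD, Splunk compares the alert text against each wildcarded pattern, so a single lookup call replaces the 100-line LIKE chain.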
I need to extract a string from a message body and make a new field for it.

<Junk_Message> #body | Thing1 | Stuff2 | meh4 | so on 1 | extra stuff3 | Blah4 </Junk_Message>

I just need the text that starts with #body and ends with Blah4. To make things more fun, everything after #body is generated randomly.
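A sketch of a rex extraction (the field name body_text is an assumption). Since the content after #body is random, it anchors on the closing tag rather than on Blah4:

... | rex "<Junk_Message>\s*(?<body_text>#body.*?)\s*</Junk_Message>"

This captures everything from #body up to (but not including) </Junk_Message> into the new field body_text.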
Here is what I am attempting to write SPL to show. I will have users logged into several hosts, all using a web application. I want to see the last (most recent) activity performed by each logged-in user. Here is what I have so far:

index=anIndex sourcetype=aSourcetype
| rex field=_raw "^(?:[^,\n]*,){2}(?P<aLoginID>[^,]+)"
| rex field=_raw "^\w+\s+\d+_\w+_\w+\s+:\s+\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+,(?P<anAction>\w+)"
| search aLoginID!=null
| stats max(_time) AS lastAttempt BY host aLoginID
| eval aTime = strftime(lastAttempt, "%Y-%m-%d %H:%M:%S %p ")
| sort -aTime
| table host aLoginID aTime
| rename host AS "Host", aLoginID AS "User ID", aTime AS "User Last Activity Time"

I am getting my data as expected by host and aLoginID, but I want to see only the most recent anAction. When I add anAction to my BY clause (BY host aLoginID anAction), the user ID starts repeating in my results, as I would expect, since each anAction name is different, and I see one row for each anAction name. I think I am on the right path, but I want only one row per user, not one row per user ID and action.
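A sketch of the adjustment, keeping the existing field names: pull the most recent action with latest() instead of grouping by it.

index=anIndex sourcetype=aSourcetype
| rex field=_raw "^(?:[^,\n]*,){2}(?P<aLoginID>[^,]+)"
| rex field=_raw "^\w+\s+\d+_\w+_\w+\s+:\s+\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+\.\w+,(?P<anAction>\w+)"
| stats max(_time) AS lastAttempt latest(anAction) AS lastAction BY host aLoginID
| eval aTime = strftime(lastAttempt, "%Y-%m-%d %H:%M:%S")
| table host aLoginID lastAction aTime

latest() takes the anAction value from the most recent event in each host/user group, so the result stays at one row per user while still showing the last action performed.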
I am trying to generate three reports with stats. The first is where jedi and sith have matching columns; the third is where jedi and sith do not match. Example:

index=jedi | table saber_color, Jname, strengths
index=sith | table saber_color, Sname, strengths

I need to list where Jname=Sname, and the third one is where Jname!=Sname. The caveat is that I cannot use join for this query. Any good ideas?
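A sketch of a join-free approach using a combined search (the field handling assumes each event carries either Jname or Sname):

(index=jedi) OR (index=sith)
| eval name=coalesce(Jname, Sname)
| stats values(index) AS sources values(saber_color) AS saber_color values(strengths) AS strengths BY name
| eval status=if(mvcount(sources)=2, "matched", "unmatched")

Filtering on status="matched" gives the names present in both indexes, and status="unmatched" gives the names present in only one; running stats over the combined events avoids join entirely.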
Hello, I have a system log which contains different DNS error messages (in the 'Message' field), and I am looking for an easy way to provide a short, meaningful description for those messages, either by adding a new field representing each unique DNS error message, or by appending text to the Message field. Here's an example; one event contains the following:

Message="DNS name resolution failure (sos.epdg.epc.mnc720.mcc302.pub.3gppnetwork.org)"

This error is related to WiFi calling, so I would like to associate a description or tag with that specific message, e.g. "WiFi calling". Thoughts?
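A sketch of one way to attach descriptions with eval (the patterns shown are assumptions to extend):

... | eval description=case(
        match(Message, "epdg\.epc\..*3gppnetwork\.org"), "WiFi calling",
        match(Message, "DNS name resolution failure"), "DNS resolution failure",
        true(), "Other")

For a longer list of message patterns, the same mapping could live in a lookup table with match_type = WILDCARD on a Message-pattern column, which keeps the search itself short.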
I am using Splunk 8.2.12 and am trying to generate a PDF via an existing alert action using Splunk API calls. The action was originally developed for automated ticketing within another app when a Splunk alert is triggered. The end goal is to be able to upload a PDF of the search results behind the alert to the ticket in an automated way. Below is the current state of the code:

def create_pdf_for_ticket(payload, output_file):
    # Extract relevant information from the payload
    ticket_id = payload.get('sid')
    index = payload.get('result', {}).get('index')
    sourcetype = payload.get('result', {}).get('sourcetype')

    # Construct the search query based on the extracted information
    search_query = f'search index={index} sourcetype={sourcetype} sid={ticket_id}'

    # Make the API request to execute the search and get the results
    search_payload = {
        'search': search_query,
        'output_mode': 'json',
    }
    search_response = requests.get('http://localhost:8089/services/search/jobs/export',
                                   params=search_payload,
                                   headers=post_headers)

    # Check if the search request was successful
    if search_response.status_code == 200:
        # Save the search results to a file
        with open(output_file, 'wb') as pdf_file:
            pdf_file.write(search_response.content)
        print(f"PDF created successfully at: {output_file}")
    else:
        print(f"Error creating PDF: {search_response.status_code} - {search_response.text}")

def main():
    *****
    # Create PDF for the ticket
    output_file = os.environ['SPLUNK_HOME'] + '/etc/apps/Splunk_Ivanti/local/ticket.pdf'
    create_pdf_for_ticket(payload, output_file)
    *****
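A sketch of an alternative worth testing (assuming the integrated PDF renderer is available on 8.2; the report name and app namespace below are placeholders): the export endpoint above returns JSON/CSV/XML rather than a PDF, whereas /services/pdfgen/render renders a saved report or dashboard straight to PDF.

pdf_response = requests.get(
    'https://localhost:8089/services/pdfgen/render',
    params={'input-report': 'my_alert_report', 'namespace': 'search'},  # placeholders
    headers=post_headers,
    verify=False,  # lab setting; use proper certificates in production
)
if pdf_response.status_code == 200:
    with open(output_file, 'wb') as pdf_file:
        pdf_file.write(pdf_response.content)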
I'm trying to corral a string into new fields and values and having trouble. I've tried eval / split / mvexpand. The string looks like this (it's actually a field in an event):

field_id=/key1/value1/key2/value2/key3/value3/key4/value4

The end goal is to have new fields like:

field_key1=value1
field_key2=value2

so I can then search, for example, for field_key1="the value of something".
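A sketch using split() plus eval's dynamic field-name syntax (assuming the pairs always alternate key/value and the string starts with a slash, so mvindex 0 is empty):

... | eval parts=split(field_id, "/")
    | eval k1="field_".mvindex(parts,1), k2="field_".mvindex(parts,3),
           k3="field_".mvindex(parts,5), k4="field_".mvindex(parts,7)
    | eval {k1}=mvindex(parts,2), {k2}=mvindex(parts,4),
           {k3}=mvindex(parts,6), {k4}=mvindex(parts,8)

The {k1}=... form creates a field whose name is the value of k1, so /key1/value1 becomes field_key1=value1. A variable number of pairs would need a loop-style approach (e.g. foreach) instead of the fixed indexes shown here.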
I have this props.conf, and _time is almost 6 hours off from the event time. Below is my props.conf:

[app_log]
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)\d+\-\d+\-\d+\s\d+\:\d+\:\d+\w
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
disabled = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
TZ = US/Central

Sample log (the event time that is ingesting fine is "2023-11-14 10:59:58Z"):

2023-11-14 10:59:58Z stevelog Closed Successfully
2023-11-14 10:59:58Z stevelog_close
2023-11-14 10:59:58Z Resetting CWD back from C:\WINDOWS\SysWOW64\inetsrv
2023-11-14 10:59:58Z Resetting CWD complete, back to C:\WINDOWS\SysWOW64\inetsrv
2023-11-14 10:59:58Z steveEngineMain Thread ====================> END

The indexed _time is 6 hours off from the event time. Please see the attached screenshot and let me know what is causing the time difference.
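A sketch of the likely fix: the trailing "Z" in the timestamps marks them as UTC, but TZ = US/Central tells Splunk to read them as Central time, which is exactly a ~6-hour offset in November. Declaring the source as UTC should line things up:

[app_log]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
TZ = UTC

Splunk then stores the event time in UTC internally and renders it in each user's configured timezone at search time.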
I'm creating an alert to notify Oracle database administrators when a DB Connect connection has failed. I have created the query that returns the name of the failed connection using the Splunk _internal logs. However, I would like to include the hostname and default database that are defined in the connection. I have not been able to locate logs with the connection host and default database using the connection name as the search criteria. Is there a REST or curl command available that retrieves the host and default database (using the connection name as input) that I can use to join with my completed query that retrieves failed connections? Thanks in advance.
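A sketch using the rest search command (assuming DB Connect v3, which stores connection definitions in db_connections.conf; the app path below reflects that assumption):

| rest /servicesNS/nobody/splunk_app_db_connect/configs/conf-db_connections
| rename title AS connection_name
| table connection_name host port database identity

Appending this to the failed-connection search and aggregating with stats by connection_name would attach the host and default database without needing curl at all.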
Is the app (Cisco Secure eStreamer Client Add-On, https://splunkbase.splunk.com/app/3662) even usable on Splunk Cloud? I can install it from the "Browse more apps" page in the cloud app management area, but it seems I will not be able to set it up or use it, because 1) it requires you to edit a config file on disk; 2) it writes the data it retrieves from Cisco to a local disk; and 3) it is not possible to create a monitor input on local disk in Splunk Cloud. The only real option seems to be to use a heavy forwarder. Any suggestions?
Hi! I am faced with a very specific problem. We use Splunk Enterprise 7.3.0, and we have ru_RU in the address bar instead of en-US. In the file /opt/splunk/etc/system/local/times.conf, we changed the display language of the time input to Russian. When the Date & Time Range item is selected in the time input and a period is set with the Between option, the range is applied, but the input itself disappears from the dashboard. An error appears in the console: moment().splunkFormat() does not support the locale ru. If we use en-US instead of ru_RU in the address bar, the error does not occur, but that does not suit us. I tried adding the file ru.js to the locale folder, but then Splunk stops working. Please tell me how this error can be fixed. Thanks!
Hello. I have logs which contain a field "matching", which is a string. This field contains this kind of information: [firstName, lastName, mobileNumber, town, ipAddress, dateOfBirth, emailAddress, countryCode, fullAddress, postCode, etc.]. What I want to do is compose a query that returns the count for a specific combination, such as [mobileNumber, countryCode], and displays only the results that contain those words. I tried this query:

index="source*"
| where matching LIKE "%mobileNumber%" AND matching LIKE "%countryCode%"
| stats count by matching
| table count matching

But the answer returns every distinct variant that also contains [mobileNumber, countryCode]; what I want is a single count covering all of these results. I also want to create a table with all the specific searches I do. I know how to use append, but the result looks like a staircase; what other solution can be used? Thank you!
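A sketch of both pieces (the term names follow the examples above). A plain stats count with no BY clause collapses everything into one total, and appendcols lines up several such totals side by side instead of stacking them:

index="source*" matching="*mobileNumber*" matching="*countryCode*"
| stats count AS mobile_and_country
| appendcols
    [ search index="source*" matching="*ipAddress*" matching="*town*"
      | stats count AS ip_and_town ]

Each subsearch contributes one column on the same single row, which avoids the staircase effect that append produces.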
Hello, we are trying to change the blacklists below:

blacklist3 = EventCode="4690"
blacklist4 = EventCode="5145"
blacklist5 = EventCode="5156"
blacklist6 = EventCode="4658"
blacklist7 = EventCode="5158"

to a single blacklist with multiple event codes. We have tried:

blacklist3 = EventCode=5145,5156,4658,4690,5158

and

blacklist3 = EventCode="5145" OR "5156" OR "4658" OR "4690" OR "5158"

and neither of these is applying and filtering out the event codes. Any recommendations on how to get this to work?
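A sketch of the two formats that should work (the stanza name is an assumption). In the advanced key=regex format the value is a regular expression, so alternation goes inside one quoted regex:

# inputs.conf
[WinEventLog://Security]
# Simple format: a bare comma-separated list of event IDs, with no key=
blacklist = 4658,4690,5145,5156,5158

# Or the advanced format: the value is a regex, so use alternation
blacklist3 = EventCode="^(4658|4690|5145|5156|5158)$"

The two attempts above fail because the simple list form cannot be combined with the EventCode= key, and OR is not part of the blacklist syntax.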