Hi. When I search for the top users who logged into a host, I still get raw event data along with the user when I use a pipe, e.g.: sourcetype="hostname" "authentication success" | top limit=50 User. Can someone help with this issue?
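For context, `top` is a transforming command: its output lands on the Statistics tab, while the Events tab keeps showing the raw events that matched the base search. A minimal sketch, assuming the extracted field really is named `User`:

```spl
sourcetype="hostname" "authentication success"
| top limit=50 User showperc=false
| fields User count
```

Viewing the Statistics tab (or saving the panel as a statistics table) shows only `User` and `count`, with no raw event data.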
Hi guys, I have configured the Radware DDoS app in Splunk. I want to gather the total amount of traffic from the DDoS app (traffic that looks like an attack) in GB. My sample query looks like this: index="security" sourcetype=DefensePro action="*" policy=* | `Top_attack_types(*)` How do I go about this?
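A sketch of the byte-to-GB conversion, assuming the events carry a `bytes` field (the actual field name in the DefensePro sourcetype may differ, so check your field extractions first):

```spl
index="security" sourcetype=DefensePro action="*" policy=*
| stats sum(bytes) as total_bytes
| eval total_gb = round(total_bytes / 1024 / 1024 / 1024, 2)
```

Swap `stats` for `timechart span=1h sum(bytes) as total_bytes` if you want the volume over time rather than a single total.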
Hi, a customer I am dealing with has a hybrid setup (UF, HF, DS on-prem) with the rest of the infrastructure in Splunk Cloud. There are 2800+ Universal Forwarders in a missing status. These were operational, but filtering was not set up correctly, so they blew through the 150 GB limit on Splunk Cloud. They decided to run an SCCM deployment to delete the .conf files in the UF configuration. Now a re-install of the agent and an attempt to apply the HF config is not changing these statuses. Would setting the rebuild forwarder assets period to 24 delete all HFs with missing status, and would these be discovered again? https://community.splunk.com/t5/Getting-Data-In/How-far-back-can-be-go-when-rebuilding-the-forwarders-assets/m-p/249196 Or do we need to do a complete uninstall of the UF package in SCCM, then re-deploy 9.0.2 with the .conf files? Thanks, Stuart
Good day, I am working on a Splunk project, end to end from log ingestion to creating search heads and dashboards. I need sample logs for Salesforce and Cisco Secure Email. Nothing with sensitive information; something old I can work with is fine. Where can I find sample logs for appliances and other applications for use in Splunk? I understand there are some great websites out there. I might need specific TAs but will deal with that later; I just need to get my hands on sample logs and set up a syslog server and the other components. Thanks!
I am unable to push shcluster bundles after upgrading to 9.0.2 from 8.2.7. I completed the upgrade and migrated the KV store without error, and I see the following expected settings:

serverVersion : 4.2.17
storageEngine : wiredTiger

The error I receive is:

"Error in pre-deploy check, uri=https://<HOST_NAME>/services/shcluster/captain/kvstore-upgrade/status, status=502, error=No error"

If I look in splunkd.log I see the following error for each attempt:

HttpClientRequest [2071959 TcpChannelThread] - Caught exception while parsing HTTP reply: Unexpected character while looking for value: '<'

The error from the actual command makes me think there was an issue with the kvstore-upgrade that just is not showing.
Hi, I hope you are well. I would like advice or suggestions for a project on setting up a NOC for an operator network. Thank you.
Hi, we're preparing to upgrade Splunk Enterprise from 8 to 9 and have a question about this requirement: "For distributed deployments of any kind, confirm that all machines in the indexing tier satisfy the following conditions: ... They do not run their own saved searches." If our indexers are also search heads, would that violate this?
Hello Splunk experts, our organization has multiple applications. A work item, such as an order, passes through various applications, and the actions performed on it are logged. Different apps have different log formats. Here is what I am trying to do with my dashboard: when a user enters a work item # in the dashboard input, it shows the "journey" of that work item as it is processed by each app and passed on. I have panels on the dashboard showing the log entries for when it was received, processed, and then passed on to the next app in the chain. Now I am trying to get a bit more creative. In addition to the panels, I am planning to add a label to the dashboard with a story template such as: "An order with <item # from input> placed by <username extracted from first or nth search result of app1> arrived for processing at <time from first or nth search result of app1>. Then it was passed on to app2 at <time from first or nth search result of app2>. <if there is any error:> The item encountered an error in app2. The error is <error extracted from search result of app2>. Please contact blah blah." The idea is to generate a human-readable "story", i.e. text built from the search results of each panel, so that someone looking at the dashboard does not have to examine multiple panels to understand what is going on; they can simply read the story. I can get the resultCount using <progress> and <condition> tags in the dashboard, but I do not know how to fetch and examine the first or nth search result, or look for specific text (such as an error, or the time of the nth result) within the results displayed in a panel for a particular app. Any hints or specific examples appreciated. Thanks much!
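One way to surface values from a single search result in a label is the `$result.<field>$` token, which a `<done>` handler can capture from the first result row. A minimal Simple XML sketch; the index, `item_tok` token, and field names (`username`, `_time`) are placeholders for whatever your app1 search actually returns:

```xml
<search>
  <query>index=app1_logs item=$item_tok$ | head 1 | table username _time</query>
  <done>
    <set token="story_user">$result.username$</set>
    <set token="story_time">$result._time$</set>
  </done>
</search>
<html>
  <p>An order placed by $story_user$ arrived for processing at $story_time$.</p>
</html>
```

Because `$result.field$` only exposes the first row, fetching the nth result generally means shaping the search itself (e.g. `| head n | tail 1`) before the tokens are set.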
I have an access log which prints like this: server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US HTTP/1.1" 200 350 85 My rex is: | rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<uri_path>\S+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)" Is there a way to separate the URI into two or three parts? /google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US into /google and /page1/page1a/633243463476/googlep1?sc=RT&lo=en_US, or into /google, /page1/page1a/633243463476/googlep1, and ?sc=RT&lo=en_US
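Splunk's `rex` uses PCRE, so the grouping can be prototyped outside Splunk first. A sketch in Python with named groups that splits the sample URI from the question into first segment, remaining path, and query string (drop it into `| rex field=uri_path "..."` once it behaves):

```python
import re

# Sample URI taken from the question above
uri = "/google/page1/page1a/633243463476/googlep1?sc=RT&lo=en_US"

# uri_root: first path segment; uri_rest: remaining path; uri_query: after '?'
pattern = re.compile(r"^(?P<uri_root>/[^/?]+)(?P<uri_rest>[^?]*)(?:\?(?P<uri_query>.*))?$")
m = pattern.match(uri)
print(m.group("uri_root"))   # /google
print(m.group("uri_rest"))   # /page1/page1a/633243463476/googlep1
print(m.group("uri_query"))  # sc=RT&lo=en_US
```

The same pattern in SPL would be `| rex field=uri_path "^(?<uri_root>/[^/?]+)(?<uri_rest>[^?]*)(\?(?<uri_query>.*))?$"`, assuming `uri_path` holds the full URI.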
Hi, I am a new Splunk user and this is my first post on the community forum; if I am not following the guidelines, please let me know. I am getting an error on the last line of my search. What is the issue? index=web | eval hash=md5(file) | stats count by file, hash | sort - count | eval bad_hash=case((hash==7bd51c850d0aa1df0a4ad7073aeaadf7), "malicious_file")
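For reference, `eval` (and therefore `case`) treats an unquoted token as a field name or expression, so a string literal such as an MD5 hash needs double quotes. A sketch of the quoted form:

```spl
index=web
| eval hash=md5(file)
| stats count by file, hash
| sort - count
| eval bad_hash=case(hash=="7bd51c850d0aa1df0a4ad7073aeaadf7", "malicious_file")
```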
I have the following dropdowns. If I select test1, then test11 in the second dropdown, and then change the first dropdown to test2, the second dropdown holds the old value and shows test11, although that's no longer a valid option.

<form version="1.1" theme="dark">
  <fieldset>
    <input type="dropdown" token="test">
      <label>Test</label>
      <choice value="*">All</choice>
      <choice value="test1">test1</choice>
      <choice value="test2">test2</choice>
    </input>
    <input type="dropdown" token="dependsontest" searchWhenChanged="false">
      <label>DependsOnTest</label>
      <fieldForLabel>$test$</fieldForLabel>
      <fieldForValue>$test$</fieldForValue>
      <choice value="*">All</choice>
      <search>
        <query>| makeresults | fields - _time | eval test1="test11,test12",test2="test21,test22" | fields $test$ | makemv $test$ delim="," | mvexpand $test$</query>
      </search>
    </input>
  </fieldset>
</form>

I tried <selectFirstChoice>true</selectFirstChoice> and default values, but those didn't work. How can I go about this? EDIT: Using <change> and <condition> I found that the "dependsontest" token gets updated, but the dropdown UI doesn't.
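One pattern worth trying (a sketch; token names match the sample above) is to clear the dependent dropdown's form token whenever the parent changes, using a `<change>` block on the first input:

```xml
<input type="dropdown" token="test">
  <label>Test</label>
  <choice value="*">All</choice>
  <choice value="test1">test1</choice>
  <choice value="test2">test2</choice>
  <change>
    <unset token="form.dependsontest"></unset>
  </change>
</input>
```

Unsetting `form.dependsontest` (the UI-side token, not just `dependsontest`) is what forces the second dropdown to drop its stale selection and repopulate.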
I am using the Python SDK to run Splunk queries at a 10-minute interval to collect data for my application. I have nearly 300 queries that I need to run every 10 minutes. I have 4 FIDs to run these 300 queries, so roughly 75 queries per FID, and I am using ProcessPoolExecutor in Python to execute only 20 at a time so no concurrency limit is reached. What I am observing is that sometimes I get results and sometimes I get no data from Splunk, even though the connection to Splunk was successful and the query completed with no errors. Am I reaching any limits here?

splunkParams = {
    "earliest_time": "-10m",
    "latest_time": "now"
}
oneshotsearch_results = splunkService.jobs.oneshot(query, **splunkParams)
reader = results.ResultsReader(oneshotsearch_results)
I am running Squid 5.2 and having an issue adding the splunk_recommended_squid log format to my Squid configuration. I pulled the log format right out of the Splunk documentation; I'll paste it at the end of this message. When I try to start Squid with that log format, I get an error:

"FATAL: Bungled /etc/squid/squid.conf line 11: logformat splunk_squid %ts.%03tu logformat=splunk_recommended_squid duration=%tr src_ip=%>a src_port=%>p dest_ip=%<a dest_port=%<p user_ident="%[ui" user="%[un" local_time=[%tl] http_method=%rm request_method_from_client=%<rm request_method_to_server=%>rm url="%ru" http_referrer="%{Referer}>h" http_user_agent="%{User-Agent}>h" status=%>Hs vendor_action=%Ss dest_status=%Sh total_time_milliseconds=%<tt http_content_type="%mt" bytes=%st bytes_in=%>st bytes_out=%<st sni="%ssl::>sni"

I haven't been able to find anything solid to help with this. Has anyone else experienced this? Thanks, -Rob
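One possibility to rule out (an assumption, not a confirmed diagnosis): `%ssl::>sni` is only recognized when Squid is built with OpenSSL support, and an unrecognized format code makes squid.conf parsing fail with a "Bungled" error. Also note the whole `logformat` directive must be one logical line. A sketch to test with, using a cut-down format (field list abbreviated for illustration):

```
# Assumption: if this parses but the full format does not, the %ssl::>sni
# token (which requires an OpenSSL-enabled Squid build) is the culprit.
logformat splunk_recommended_squid %ts.%03tu duration=%tr src_ip=%>a dest_ip=%<a http_method=%rm url="%ru" status=%>Hs vendor_action=%Ss bytes_in=%>st bytes_out=%<st
access_log daemon:/var/log/squid/access.log splunk_recommended_squid
```

`squid -k parse` reports which token it chokes on without restarting the service.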
I have a Splunk log like this:

Client Map Details : {A=123, B=245, C=456}

The map can contain more values than these 3, or fewer; maybe 0 to 10 entries. I want to get the sum of all the values in the map and plot it on a graph. For example, for the event above, 123+245+456=X, and I need to plot X. I am able to get the multivalue field with:

index=temp sourcetype="xyz" "Client Map Details : "
| rex field=_raw "Client Map Details \{(?<map>[A-Z_0-9= ,]+)\}"
| eval temp=split(map,",")

The output from the above is:
A=123
B=245
C=456

Now how can I iterate over each value of temp, split it on "=", and get each value? Or is there a better way to do this? And how do I plot a graph of this?
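Rather than iterating, one approach is to extract every numeric value in one pass with `max_match=0`, expand, and aggregate. A sketch (the `span` and the `: ` in the rex are assumptions; adjust to your actual raw format):

```spl
index=temp sourcetype="xyz" "Client Map Details : "
| rex field=_raw "Client Map Details : \{(?<map>[^}]+)\}"
| rex field=map max_match=0 "=\s*(?<val>\d+)"
| mvexpand val
| timechart span=1h sum(val) as map_total
```

If you need one sum per event instead of per time bucket, replace the `timechart` with `| stats sum(val) as map_total by _time`.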
Hello, I need to generate 1000+ records (5-10 fields) of fake PII. What best practices, SPL, or processes have you designed to create this via | makeresults or lookups/KV stores? Thanks and God bless, Genesius
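A minimal sketch with `makeresults`, `random()`, and `mvindex`; the name lists, field names, and SSN-style format are placeholders you would expand:

```spl
| makeresults count=1000
| streamstats count as id
| eval first_name = mvindex(split("Alice,Bob,Carol,Dan,Eve", ","), random() % 5)
| eval last_name  = mvindex(split("Smith,Jones,Lee,Patel,Kim", ","), random() % 5)
| eval ssn   = printf("%03d-%02d-%04d", random() % 900 + 100, random() % 100, random() % 10000)
| eval email = lower(first_name) . "." . lower(last_name) . "@example.com"
| table id first_name last_name ssn email
```

Piping the result through `| outputlookup fake_pii.csv` turns the one-off generation into a reusable lookup.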
I am trying to create a derived metric using |Custom Metrics|Linux Monitor|cpu|CPU (Cores) Logical. I keep getting the error: WARN NumberUtils-Linux Monitor - Unable to validate the value as a valid number java.lang.NumberFormatException: For input string: "Cores" Does anyone know how to use that metric in a derived metric? I believe the parser is treating "Cores" as its own token because of the parentheses and spaces in the metric name. I tried using escape characters with no luck.
Hello, I'm having trouble updating tier, application, and node names for multiple servers hosting app and machine agents. After making the configuration changes and restarting all Java agent instances/services, we receive the errors shown below in the logs. Could anyone help interpret what the logs indicate? Has anyone seen this issue or know what the problem is? Please let me know if additional details would be helpful.

[AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [unknown]
[AD Agent init] Sun Dec 11 01:26:17 EST 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/app/appdynamics/appagent/ver22.3.0.33637]
[AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Agent node directory set to [unknown]
2022-12-11 01:26:17,834 AD Agent init ERROR Cannot access RandomAccessFile java.io.IOException: Could not create directory /opt/app/appdynamics/appagent/ver22.3.0.33637/logs/unknown
java.io.IOException: Could not create directory /opt/app/appdynamics/appagent/ver22.3.0.33637/logs/unknown
                at com.appdynamics.appagent/org.apache.logging.log4j.core.util.FileUtils.mkdir(FileUtils.java:120)
                at com.appdynamics.appagent/org.apache.logging.log4j.core.util.FileUtils.makeParentDirs(FileUtils.java:137)
                at com.appdynamics.appagent/org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFile
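The `node name [unknown]` lines suggest the agent never resolved a node name from its configuration. For reference, these are the relevant elements in the Java agent's controller-info.xml (values below are placeholders, not your actual names):

```xml
<controller-info>
  <application-name>MyApp</application-name>
  <tier-name>MyTier</tier-name>
  <node-name>node-01</node-name>
</controller-info>
```

The same values can also be supplied as JVM system properties (e.g. -Dappdynamics.agent.nodeName=node-01), which override the XML; if both are set, it is worth checking which source the failing instances are actually picking up.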
I want to write a rex command for the following regex and have the findings dumped into a new field called "sql-tautology": index=webserverlogs | regex "(?i:([\s'\"`´’‘\(\)]*?)\b([\d\w]++)([\s'\"`´’‘\(\)]*?)(?:(?:=|<=>|r?like|sounds\s+like|regexp)([\s'\"`´’‘\(\)]*?)\2\b|(?:!=|<=|>=|<>|<|>|\^|is\s+not|not\s+like|not\s+regexp)([\s'\"`´’‘\(\)]*?)(?!\2)([\d\w]+)\b))" | rex
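Note that `rex` capture-group names only allow word characters, so the field has to be captured as something like `sql_tautology` and renamed afterwards. A sketch of the shape, using a much-simplified stand-in pattern for illustration (substitute your full regex, wrapping the part you want extracted in the named group):

```spl
index=webserverlogs
| rex field=_raw "(?i)(?<sql_tautology>\b[\d\w]+\s*(?:=|<=>)\s*[\d\w]+\b)"
| rename sql_tautology as "sql-tautology"
| where isnotnull('sql-tautology')
```

Unlike `regex` (which only filters events), `rex` performs the extraction, so no separate `| regex` stage is needed.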
EventAgentLogin ==================   2022-12-14 06:39:03.875 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentLogin' (73) attributes: AttributeReasons [bstr] = KVList:               'referrer' [str] = "https://aaa.test.com"               'clientApp' [str] = "asdfsfdf"               'location' [str] = "server" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0077MP"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744434548 TimeStamp:               AttributeTimeinSecs [int] = 1671021543               AttributeTimeinuSecs [int] = 882837'   -------------------------------------------------------------------------------------   agent ready state" =======================   2022-12-14 07:59:12.764 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentReady' (75) attributes: AttributeReasons [bstr] = KVList:               'site' [str] = "CTC" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744780812 TimeStamp:               AttributeTimeinSecs [int] = 1671026352               AttributeTimeinuSecs [int] = 766178'   -------------------------------------------------------- EventAgentNotReady ================== 2022-12-14 08:01:31.602 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList: AttributeThisDN 
[str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744808316 TimeStamp: ---------------------------------------------------------- Training ==========   2022-12-14 08:02:47.211 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList:               'ReasonCode' [str] = "Training" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744821504 TimeStamp:               AttributeTimeinSecs [int] = 1671026567               AttributeTimeinuSecs [int] = 209306'   ------------------------------------------------------------------------- "EventAgentNotReady" AND "Break" ====================================   2022-12-14 08:04:34.025 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList:               'ReasonCode' [str] = "Break" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744838251 TimeStamp:               AttributeTimeinSecs [int] = 1671026674               AttributeTimeinuSecs [int] = 24284'   
---------------------------------------------------------------------------------- AfterCallWork ===============   2022-12-14 08:07:31.310 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList: AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'ReasonCode' [str] = "ManualSetACWPeriod"               'WrapUpTime' [str] = "untimed"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 3 [AfterCallWork] AttributeEventSequenceNumber [long] = 744864731 TimeStamp:               AttributeTimeinSecs [int] = 1671026851               AttributeTimeinuSecs [int] = 319075'   -------------------------------------------------------------------------------------------- EventAgentLogout =====================   2022-12-14 08:10:09.778 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentLogout' (74) attributes: AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeEventSequenceNumber [long] = 744889386 TimeStamp:               AttributeTimeinSecs [int] = 1671027009               AttributeTimeinuSecs [int] = 779569'     I have different event log format but I am trying to extract the common fields (highlighted and underlined fields). Since the  log format is different I am not sure how to extract the values using single rex or regex query. 
The fields I am looking for are: Endpoint = SERVER1 received message = EventAgentLogout AttributeThisDN = 1990613829 AttributeAgentID = 34434343 AttributeAgentWorkMode = AfterCallWork ReasonCode = Break   A few fields, like ReasonCode, are not present in some log events. But the common field is AttributeThisDN, which is unique across all the events.
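Since each `rex` applies independently and a non-matching `rex` simply leaves its field null, one approach is a separate `rex` per field rather than a single all-in-one pattern. A sketch based on the samples above (the base search and sourcetype are placeholders):

```spl
sourcetype=your_sourcetype "ServerEventFactory"
| rex "Endpoint '(?<Endpoint>[^']+)'"
| rex "received message ''(?<received_message>\w+)'"
| rex "AttributeThisDN \[str\] = \"(?<AttributeThisDN>[^\"]+)\""
| rex "AttributeAgentID \[str\] = \"(?<AttributeAgentID>[^\"]+)\""
| rex "AttributeAgentWorkMode \[int\] = \d+ \[(?<AttributeAgentWorkMode>[^\]]+)\]"
| rex "'ReasonCode' \[str\] = \"(?<ReasonCode>[^\"]+)\""
| table Endpoint received_message AttributeThisDN AttributeAgentID AttributeAgentWorkMode ReasonCode
```

Events without a ReasonCode just show an empty cell, and AttributeThisDN remains available on every row for joining the events together.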
Hi Community! We recently installed the JIRA Service Desk simple add-on version 2013, added an account, and we can connect to our Jira Service Desk. We see the projects and all the other cool stuff, but we cannot get a Support Request created. The error message is:

2022-12-13 15:13:04,404 ERROR pid=30082 tid=MainThread file=cim_actions.py:message:292 | sendmodaction - signature="JIRA Service Desk ticket creation has failed!. url=https://a.b.c.com/rest/api/latest/issue, data={'fields': {'project': {'key': 'SPLUN'}, 'summary': 'Splunk Alert: Splunk test', 'description': 'The alert condition for whatsoever was triggered.', 'issuetype': {'name': 'Service Request'}, 'priority': {'name': 'Lowest'}}}, HTTP Error=400, content={"errorMessages":[],"errors":{"summary":"Field 'summary' cannot be set. It is not on the appropriate screen, or unknown.","description":"Field 'description' cannot be set. It is not on the appropriate screen, or unknown.","priority":"Field 'priority' cannot be set. It is not on the appropriate screen, or unknown."}}" action_name="jira_service_desk" search_name="Jira Test" sid="scheduler_cGV0ZXIuenVtYnJpbmtAaW5na2EuaWtlYS5jb20__search__RMD5c624f9724a09e6de_at_1670944380_2203_4B6BB912-8E18-4D05-9295-5A80D62CEEC7" rid="0" app="search" user="a.b.c.d" action_mode="saved" action_status="failure"

I assume the add-on is reaching out to the wrong REST endpoint. When using curl, we are advised to connect to: https://a.b.c.com/rest/servicedeskapi/request Can this be configured? Or do I just misunderstand the whole thing? If so, guidance would be appreciated. I'm kind of lost @guilmxm Peter