All Topics


Hi, I am a new Splunk user and this is my first post on the community forum. If I am not following the guidelines, please let me know. I am getting an error on the last line of my search. What is the issue?

index=web | eval hash=md5(file) | stats count by file, hash | sort - count | eval bad_hash=case((hash==7bd51c850d0aa1df0a4ad7073aeaadf7), "malicious_file")
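One likely cause, noted here as an assumption rather than a confirmed answer: in SPL, a string literal inside case() must be double-quoted, so the comparison would need to read hash=="7bd51c850d0aa1df0a4ad7073aeaadf7". The underlying hash-matching logic, sketched in Python:

```python
import hashlib

# The digest below is the one from the post; in practice this set would
# come from a threat-intel lookup rather than being hard-coded.
BAD_HASHES = {"7bd51c850d0aa1df0a4ad7073aeaadf7"}

def classify(file_bytes: bytes) -> str:
    """Label content whose MD5 digest appears in the known-bad set."""
    digest = hashlib.md5(file_bytes).hexdigest()
    return "malicious_file" if digest in BAD_HASHES else "unknown"
```

The point of the sketch: the comparison is always string-to-string, which is why the unquoted hex value in the SPL trips the parser.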
I have the following dropdowns. If I select test1 and then test11 in the second dropdown, and then change the first one to test2, the second dropdown holds on to the old value and shows test11, although that's not a valid option.

<form version="1.1" theme="dark">
  <fieldset>
    <input type="dropdown" token="test">
      <label>Test</label>
      <choice value="*">All</choice>
      <choice value="test1">test1</choice>
      <choice value="test2">test2</choice>
    </input>
    <input type="dropdown" token="dependsontest" searchWhenChanged="false">
      <label>DependsOnTest</label>
      <fieldForLabel>$test$</fieldForLabel>
      <fieldForValue>$test$</fieldForValue>
      <choice value="*">All</choice>
      <search>
        <query>| makeresults | fields - _time | eval test1="test11,test12",test2="test21,test22" | fields $test$ | makemv $test$ delim="," | mvexpand $test$</query>
      </search>
    </input>
  </fieldset>
</form>

I tried <selectFirstChoice>true</selectFirstChoice> and default values, but those didn't work. How can I go about this?

EDIT: I tried using "change" and "condition" and found that the "dependsontest" token gets updated but the dropdown UI doesn't.
I am using the Python SDK to run Splunk queries at a 10-minute interval to collect data for my application. I have nearly 300 queries that need to run every 10 minutes. I have 4 FIDs to run these 300 queries, so roughly 75 queries per FID. I am using ProcessPoolExecutor in Python to execute only 20 at a time so no concurrency limit is reached. What I am observing is that I sometimes get results and sometimes get no data from Splunk, even though the connection to Splunk was successful and the query completed with no errors. Am I reaching any limits here?

splunkResultsReaderParameters = {
    "earliest_time": "-10m",
    "latest_time": "now"
}
splunkReader = "ResultsReader"
oneshotsearch_results = splunkService.jobs.oneshot(query, **splunkResultsReaderParameters)
reader = results.ResultsReader(oneshotsearch_results)
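As a sanity check on the batching side, the 20-at-a-time pattern can be exercised with a stand-in for the SDK call; run_query below is a placeholder, not the real splunklib API, and threads stand in for the post's ProcessPoolExecutor. It is also worth double-checking that the dict actually passed to jobs.oneshot is the one defining earliest_time/latest_time, since a search run without them falls back to a default time range and can quietly return no rows.

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(query: str) -> list:
    """Placeholder for splunkService.jobs.oneshot(query, ...); returns fake rows."""
    return [{"query": query, "row": i} for i in range(3)]

def run_batch(queries, max_workers=20):
    """Run all queries with at most max_workers in flight at a time."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_query, queries))

# 75 queries per FID, capped at 20 concurrent, as described in the post.
results = run_batch([f"search {i}" for i in range(75)])
```

If the real calls succeed but some result sets are empty, logging the per-query row count from a harness like this helps separate a concurrency problem from a time-range or quota problem.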
I am running Squid 5.2 and having an issue adding the splunk_recommended_squid log format to my Squid configuration. I pulled the log format right out of the Splunk documentation; I'll paste it at the end of this message. When I try to start Squid with that log format, I get an error:

FATAL: Bungled /etc/squid/squid.conf line 11: logformat splunk_squid %ts.%03tu logformat=splunk_recommended_squid duration=%tr src_ip=%>a src_port=%>p dest_ip=%<a dest_port=%<p user_ident="%[ui" user="%[un" local_time=[%tl] http_method=%rm request_method_from_client=%<rm request_method_to_server=%>rm url="%ru" http_referrer="%{Referer}>h" http_user_agent="%{User-Agent}>h" status=%>Hs vendor_action=%Ss dest_status=%Sh total_time_milliseconds=%<tt http_content_type="%mt" bytes=%st bytes_in=%>st bytes_out=%<st sni="%ssl::>sni"

I haven't been able to find anything solid to help with this. Has anyone else experienced this?

Thanks,
-Rob
I have a Splunk log like this:

Client Map Details : {A=123, B=245, C=456}

The map can contain more values than these 3, or fewer; there may be 0 to 10 entries. I want to get the sum of all the values of the map and plot it on a graph. For example, for the above, 123+245+456=X, and I need to plot X on the graph. I am able to get the multivalue field with:

index=temp sourcetype="xyz" "Client Map Details : " | rex field=_raw "Client Map Details \{(?<map>[A-Z_0-9= ,]+)\}" | eval temp=split(map,",")

The output from the above is:

A=123
B=245
C=456

Now how can I iterate over each value of temp, split on "=", and get each value? Or is there a better way to do this? Also, how do I plot a graph of this?
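The parse-and-sum step can be checked outside Splunk; here is a Python sketch of the same logic on the sample event (in SPL a likely equivalent is splitting on "," and summing the numbers, e.g. via mvexpand plus stats sum, or mvmap on Splunk 8+):

```python
import re

def sum_client_map(raw: str) -> int:
    """Extract the {A=123, B=245, ...} map from an event and sum its values."""
    m = re.search(r"Client Map Details : \{([^}]*)\}", raw)
    if not m:
        return 0
    pairs = [p.strip() for p in m.group(1).split(",") if p.strip()]
    return sum(int(p.split("=")[1]) for p in pairs)

print(sum_client_map("Client Map Details : {A=123, B=245, C=456}"))  # → 824
```

The function tolerates 0 to 10 entries because it sums whatever pairs are present; for plotting, the per-event sum would become the y-value of a timechart.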
Hello, I need to generate 1000+ records (5-10 fields) of fake PII. What best practices, SPL, or processes have you designed to create this via | makeresults or lookups/KV stores? Thanks and God bless, Genesius
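One common pattern, sketched under the assumption that a CSV lookup is acceptable: generate the rows outside Splunk with a small script, save the CSV as a lookup file, and read it back with | inputlookup. Every name, field, and value below is made up for illustration:

```python
import csv
import io
import random

# Hypothetical name pools; any fake-data source would do.
FIRST = ["Alice", "Bob", "Carol", "Dave", "Eve"]
LAST = ["Smith", "Jones", "Lee", "Patel", "Garcia"]

def fake_records(n: int, seed: int = 42) -> list:
    """Generate n fake-PII rows; the fixed seed keeps runs reproducible."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        first, last = rng.choice(FIRST), rng.choice(LAST)
        rows.append({
            "id": i + 1,
            "first_name": first,
            "last_name": last,
            "email": f"{first.lower()}.{last.lower()}{i}@example.com",
            "phone": f"555-{rng.randint(0, 9999):04d}",
        })
    return rows

def to_csv(rows) -> str:
    """Serialize rows to CSV text suitable for a Splunk lookup file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Dropping the output into a lookup (or a KV store via the REST API) usually scales better than building 1000+ rows inline with | makeresults and eval.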
I am trying to create a derived metric using |Custom Metrics|Linux Monitor|cpu|CPU (Cores) Logical. I keep getting the error:

WARN NumberUtils-Linux Monitor - Unable to validate the value as a valid number
java.lang.NumberFormatException: For input string: "Cores"

Does anyone know how to use that metric in a derived metric? I believe it is trying to use "Cores" as its own metric. I tried using escape characters with no luck.
Hello, I'm having trouble updating tier, application, and node names for multiple servers hosting app & machine agents.  After making the configuration changes and restarting all Java Agent instances/services, we receive the errors shown below in the logs. Could anyone help interpret what the logs indicate?  Has anyone seen this issue or know what the problem is?  Please let me know if additional details would be helpful. [AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [unknown] [AD Agent init] Sun Dec 11 01:26:17 EST 2022[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running [AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/app/appdynamics/appagent/ver22.3.0.33637] [AD Agent init] Sun Dec 11 01:26:17 EST 2022[INFO]: AgentInstallManager - Agent node directory set to [unknown] 2022-12-11 01:26:17,834 AD Agent init ERROR Cannot access RandomAccessFile java.io.IOException: Could not create directory /opt/app/appdynamics/appagent/ver22.3.0.33637/logs/unknown java.io.IOException: Could not create directory /opt/app/appdynamics/appagent/ver22.3.0.33637/logs/unknown                 at com.appdynamics.appagent/org.apache.logging.log4j.core.util.FileUtils.mkdir(FileUtils.java:120)                 at com.appdynamics.appagent/org.apache.logging.log4j.core.util.FileUtils.makeParentDirs(FileUtils.java:137)                 at com.appdynamics.appagent/org.apache.logging.log4j.core.appender.rolling.RollingRandomAccessFile
I want to write the rex command for the following regex and give it a new field where the findings will be dumped. Call the new field "sql-tautology".

index=webserverlogs | regex "(?i:([\s'\"`´’‘\(\)]*?)\b([\d\w]++)([\s'\"`´’‘\(\)]*?)(?:(?:=|<=>|r?like|sounds\s+like|regexp)([\s'\"`´’‘\(\)]*?)\2\b|(?:!=|<=|>=|<>|<|>|\^|is\s+not|not\s+like|not\s+regexp)([\s'\"`´’‘\(\)]*?)(?!\2)([\d\w]+)\b))" | rex
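A hedged note on the mechanics: regex in SPL only filters events, while rex extracts into a field via a named capture group of the form (?<name>...), and Splunk field names cannot contain a hyphen, so sql_tautology is the safer name. The named-group idea, demonstrated in Python on a deliberately simplified tautology pattern (a stand-in, not the full pattern from the post):

```python
import re

# Simplified tautology: a token compared to itself, e.g. 1=1 or a = a.
TAUTOLOGY = re.compile(r"\b(?P<sql_tautology>(\w+)\s*=\s*\2)\b", re.IGNORECASE)

def find_tautology(text: str):
    """Return the matched tautology text, or None when nothing matches."""
    m = TAUTOLOGY.search(text)
    return m.group("sql_tautology") if m else None
```

In SPL the equivalent move is wrapping the interesting part of the big pattern in (?<sql_tautology>...) inside a rex command, instead of the trailing bare | rex.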
EventAgentLogin ==================   2022-12-14 06:39:03.875 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentLogin' (73) attributes: AttributeReasons [bstr] = KVList:               'referrer' [str] = "https://aaa.test.com"               'clientApp' [str] = "asdfsfdf"               'location' [str] = "server" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0077MP"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744434548 TimeStamp:               AttributeTimeinSecs [int] = 1671021543               AttributeTimeinuSecs [int] = 882837'   -------------------------------------------------------------------------------------   agent ready state" =======================   2022-12-14 07:59:12.764 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentReady' (75) attributes: AttributeReasons [bstr] = KVList:               'site' [str] = "CTC" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744780812 TimeStamp:               AttributeTimeinSecs [int] = 1671026352               AttributeTimeinuSecs [int] = 766178'   -------------------------------------------------------- EventAgentNotReady ================== 2022-12-14 08:01:31.602 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList: AttributeThisDN 
[str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744808316 TimeStamp: ---------------------------------------------------------- Training ==========   2022-12-14 08:02:47.211 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1' received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList:               'ReasonCode' [str] = "Training" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744821504 TimeStamp:               AttributeTimeinSecs [int] = 1671026567               AttributeTimeinuSecs [int] = 209306'   ------------------------------------------------------------------------- "EventAgentNotReady" AND "Break" ====================================   2022-12-14 08:04:34.025 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList:               'ReasonCode' [str] = "Break" AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 0 [Unknown] AttributeEventSequenceNumber [long] = 744838251 TimeStamp:               AttributeTimeinSecs [int] = 1671026674               AttributeTimeinuSecs [int] = 24284'   
---------------------------------------------------------------------------------- AfterCallWork ===============   2022-12-14 08:07:31.310 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentNotReady' (76) attributes: AttributeReasons [bstr] = KVList: AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'ReasonCode' [str] = "ManualSetACWPeriod"               'WrapUpTime' [str] = "untimed"               'geo-location-agent' [str] = "CTC" AttributeAgentWorkMode [int] = 3 [AfterCallWork] AttributeEventSequenceNumber [long] = 744864731 TimeStamp:               AttributeTimeinSecs [int] = 1671026851               AttributeTimeinuSecs [int] = 319075'   -------------------------------------------------------------------------------------------- EventAgentLogout =====================   2022-12-14 08:10:09.778 TRACE 12632 --- [New I/O client worker #1-6] c.i.e.g.workflows.ServerEventFactory     : Endpoint 'SERVER1'  received message ''EventAgentLogout' (74) attributes: AttributeThisDN [str] = "1990613829" AttributeAgentID [str] = "34434343" AttributeExtensions [bstr] = KVList:               'AgentSessionID' [str] = "027DK7N0IC9H5D7NF885C2LAES0078C6"               'geo-location-agent' [str] = "CTC" AttributeEventSequenceNumber [long] = 744889386 TimeStamp:               AttributeTimeinSecs [int] = 1671027009               AttributeTimeinuSecs [int] = 779569'     I have different event log format but I am trying to extract the common fields (highlighted and underlined fields). Since the  log format is different I am not sure how to extract the values using single rex or regex query. 
The fields I am looking for are:

Endpoint = SERVER1
received message = EventAgentLogout
AttributeThisDN = 1990613829
AttributeAgentID = 34434343
AttributeAgentWorkMode = AfterCallWork
ReasonCode = Break

A few fields, like "ReasonCode", are not present in some log events. But the common field is "AttributeThisDN", which will be unique in all the events.
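Since every attribute appears as its own Name [type] = value pair, one workable approach is several small extractions instead of one big pattern, so optional fields such as ReasonCode simply come back empty when absent. Sketched in Python regex syntax; in SPL this would likely map to a handful of separate rex commands:

```python
import re

# One small pattern per field; optional fields just return None.
PATTERNS = {
    "Endpoint": r"Endpoint '([^']+)'",
    "received_message": r"received message ''(\w+)'",
    "AttributeThisDN": r'AttributeThisDN\s*\[str\] = "([^"]+)"',
    "AttributeAgentID": r'AttributeAgentID\s*\[str\] = "([^"]+)"',
    "ReasonCode": r"'ReasonCode'\s*\[str\] = \"([^\"]+)\"",
}

def extract_fields(event: str) -> dict:
    """Apply each field pattern independently to one log event."""
    out = {}
    for name, pat in PATTERNS.items():
        m = re.search(pat, event)
        out[name] = m.group(1) if m else None
    return out
```

Because each pattern is independent, the same extraction works for EventAgentLogin, EventAgentReady, and the NotReady variants without a per-format query.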
Hi Community! We recently installed the Jira Service Desk simple Add-on version 2013, added an account, and we can connect to our Jira Service Desk. We see the projects and all the other cool stuff, but we do not manage to get a Support Request created. The error message is:

2022-12-13 15:13:04,404 ERROR pid=30082 tid=MainThread file=cim_actions.py:message:292 | sendmodaction - signature="JIRA Service Desk ticket creation has failed!. url=https://a.b.c.com/rest/api/latest/issue, data={'fields': {'project': {'key': 'SPLUN'}, 'summary': 'Splunk Alert: Splunk test', 'description': 'The alert condition for whatsoever was triggered.', 'issuetype': {'name': 'Service Request'}, 'priority': {'name': 'Lowest'}}}, HTTP Error=400, content={"errorMessages":[],"errors":{"summary":"Field 'summary' cannot be set. It is not on the appropriate screen, or unknown.","description":"Field 'description' cannot be set. It is not on the appropriate screen, or unknown.","priority":"Field 'priority' cannot be set. It is not on the appropriate screen, or unknown."}}" action_name="jira_service_desk" search_name="Jira Test" sid="scheduler_cGV0ZXIuenVtYnJpbmtAaW5na2EuaWtlYS5jb20__search__RMD5c624f9724a09e6de_at_1670944380_2203_4B6BB912-8E18-4D05-9295-5A80D62CEEC7" rid="0" app="search" user="a.b.c.d" action_mode="saved" action_status="failure"

I assume the add-on is reaching out to the wrong REST endpoint. When using curl we are advised to connect to: https://a.b.c.com/rest/servicedeskapi/request

Can this be configured? Or do I just misunderstand the whole thing? If so, guidance would be appreciated. I'm kind of lost. @guilmxm

Peter
Hi all, I have some events with a field called RUNTIME for each job. How can I get the average value of RUNTIME for each job, with the result in a new field?
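In SPL this is most likely a one-liner, something like | stats avg(RUNTIME) as avg_runtime by job, or | eventstats avg(RUNTIME) as avg_runtime by job if every event should keep the new field (the job field name here is assumed). The aggregation itself, sketched in Python:

```python
from collections import defaultdict

def avg_runtime_by_job(events):
    """events: iterable of {'job': ..., 'RUNTIME': ...} dicts; returns per-job means."""
    totals = defaultdict(lambda: [0.0, 0])  # job -> [sum, count]
    for e in events:
        t = totals[e["job"]]
        t[0] += float(e["RUNTIME"])
        t[1] += 1
    return {job: s / n for job, (s, n) in totals.items()}
```

The stats variant collapses to one row per job; the eventstats variant matches the "result in a new field" phrasing by annotating every event.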
Dear community! I'm monitoring a folder of CSV files, and in each file there are two tables in the same sheet. Is there any way to assign each section its own sourcetype or source? Thank you for your help.
I have an error while trying to animate an embedded SVG in one of my XML dashboards. I noticed that the <animate> tag included in a <line> or <rect> root tag is systematically removed from my XML when rendered in the dashboard's HTML (the output). That causes errors when my custom JavaScript tries to parse the <animate> tags, as they are missing from the DOM.

<line id="id_1" stroke="rgb(0,220,228)" stroke-width="3" visibility="visible" x1="404.0" x2="833.0" y1="1255.0" y2="1255.0">
  <title>id_1</title>
  <desc>description of id_1</desc>
  <metadata>
    <route/>
  </metadata>
  <animate attributeName="stroke" attributeType="CSS" dur="1s" from="rgb(0,220,228)" id="id_1" repeatCount="indefinite" to="white"/>
</line>

I enclosed with this post a sample of the SVG embedded in my dashboard. When rendered as HTML, it looks like this:

<line id="id_1" stroke="rgb(0,220,228)" stroke-width="3" visibility="visible" x1="404.0" x2="833.0" y1="1255.0" y2="1255.0">
  <title>id_1</title>
  <desc>description of id_1</desc>
  <metadata>
  </metadata>
</line>

It used to work on Splunk 7, but somehow it doesn't work on Splunk 8 and 9. What might cause this error? Also, can someone point me to docs on how Splunk XML dashboards are exported as HTML, so I can understand which component might explain this problem? Are there SVG/XML standards that evolved from Splunk 7 to Splunk 8/9?
Hi Team, I am working on a web application firewall use case. I want to find the top targeted domains under my domain. I am trying to work with index=netwaf. Example: my domain is example.com and there is a bunch of subdomains, so I just want to find the top targeted domains by traffic size (and knowing whether the traffic is malicious would be great). I need help quickly, please.
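A likely SPL starting point, with field names depending on the WAF sourcetype, is something like index=netwaf | stats count sum(bytes) as traffic by dest_host | sort - traffic. The ranking logic itself, sketched in Python on hypothetical hostnames:

```python
from collections import Counter

def top_targets(hosts, n=3):
    """Rank hostnames by number of requests, most-hit first."""
    return Counter(hosts).most_common(n)

# Hypothetical subdomains of example.com seen in WAF traffic.
hits = ["shop.example.com", "api.example.com", "shop.example.com",
        "www.example.com", "shop.example.com", "api.example.com"]
```

Filtering to malicious traffic would just mean restricting the input (in SPL, an extra condition such as action=blocked, again depending on the sourcetype's fields).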
Hello, before upgrading to Splunk 9.x we have to move the current Splunk instances to new VMs with a new OS version and additional resources (CPU, RAM, and disk space [indexers]):

SH cluster: 3 nodes
Indexer cluster: 16 peers (2 sites)

Migration for nodes like the SHs, CM, and deployment server is pretty clear; I have some doubts about the peer nodes. We will probably migrate one peer at a time. Is it better to take a peer offline or to stop it? If offline is preferable, is it possible to extend the restart period without a time limit, for example to 9 hours or more? This is needed for syncing the file system where the indexes sit to the new VM. It is also not clear whether the offline method rolls hot buckets to warm or whether that must be done manually. Thanks,
Hello, I found some strange behavior with the join command. Here are two queries:

1) search status=pass | chart dc(id) as pass by name
2) search status=failed | chart dc(id) as failed by name

Then I try to join these queries to get one table:

3) search status=pass | chart dc(id) as pass by name | table name pass | join name [ search status=failed | chart dc(id) as failed by name | table name failed ] | table name pass failed

For some reason the "failed" column values from query #2 are different from the values produced by the join in query #3, but the "pass" column values are the same:

query 1 table:        query 2 table:
name   pass           name   failed
Tom    12             Tom    4
Jerry  13             Jerry  5

query 3 table:
name   pass   failed
Tom    12     7
Jerry  13     9
Hi to all, in the graph below I need to add the "%" symbol to the y2 axis and to translate the y1 and y2 axes. Can someone help me please?
Hi Team, currently I have several fields, and with the option below I was able to make them all the same size:

<option name="charting.chart.markerSize">1</option>

But I would like to increase the size of the orange field and decrease the size of the blue and green fields. How can this be done in Splunk? Any leads are welcome.
Hi, we have a project to implement Splunk so that it talks out to Splunk Cloud via a proxy server. To do this, I believe we need to configure the heavy forwarder to connect to the proxy using SOCKS5 on port 1080, as per this article: https://docs.splunk.com/Documentation/Splunk/9.0.2/Forwarding/ConfigureaforwardertouseaSOCKSproxy

I believe I've done this correctly, and we also think the proxy is configured correctly, yet we aren't seeing the data flow into Splunk. Am I missing something with the config on the forwarder, or is it really just as that article presents it? Looking at the deployment server, I can see the test endpoints we've added, so all of that seems to be working; it's now getting the data out and into the cloud that we need. Splunk is all new to me and I'm trying to work it out on the fly, so any pointers in the right direction would be appreciated.