Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello partners, I request your kind support. I intend to enable the Linux ESCU correlation searches, but they do not work well because the data models are not fully populated. I know the data models are necessary, but my observation is that the Linux events do not contain all the values needed to fill them. So my question to the community is the following: which audit, messages, or syslog rules must be active for the correct collection of events?
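
Not an authoritative answer, but before tuning audit rules it can help to measure which CIM fields actually arrive populated; ESCU's Linux detections mostly read from the Endpoint data model, and process-execution auditing (execve syscall rules in auditd, or Sysmon for Linux) is typically what feeds its Processes node. A minimal coverage-check sketch, assuming the standard CIM Endpoint data model (adjust the dataset name to whatever your correlations use):

| from datamodel:"Endpoint.Processes"
| fieldsummary
| table field count distinct_count
| sort count

Fields with a count near zero are the ones your audit/syslog configuration is not supplying.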
<input type="dropdown" token="tok_choice" searchWhenChanged="true"> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> ... | stats dc(field2) as fi... See more...
<input type="dropdown" token="tok_choice" searchWhenChanged="true"> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> ... | stats dc(field2) as field2number by host </query> </search> <change> <condition match=" like($tok_choice$,&quot;%&quot;) "> <set token="show_another_panel">show</set> <set token="another_result"> $result.field2number$ </set> </condition> </change> </input> The token for 'show_another_panel' is working just fine but the other token is treating the whole $result.field2number$ as full text including the $.  The drop down is working as expected with fieldForLabel and fieldForValue. I have tried the following. <done> <set token="another_result"> $result.field2number$ </set> </done> This sets the token to the field2number first row.  The value does not update to the row based upon selecting a new host. When selecting a new host, I want the token to update to the corresponding value of the alternate row.  Any suggestions?
Do Splunk Heavy Forwarders support Amazon Corretto as opposed to Java?
I have events like this:

11/06/2023 12:34:56 ip 1.2.3.4 This is record 1 of 5
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1 1.0 0.0 2492 604 ? Ss 12:27 0:00 proc01
user 6 0.5 0.0 2608 548 ? S 12:27 0:00 proc02
user 19 0.0 0.0 12168 7088 ? S 12:27 0:00 proc03
user 223 0.0 0.1 852056 39300 ? Ssl 12:27 0:00 proc04
user 470 0.0 0.0 7844 6016 pts/0 Ss 12:27 0:00 proc05
user 683 0.0 0.0 7872 3380 pts/0 R+ 12:37 0:00 proc06
11/06/2023 12:34:56 ip: 1.2.3.4 This is record 2 of 5
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1 0.0 0.0 2492 604 ? Ss 12:27 0:00 proc07
user 6 9.0 0.0 2608 548 ? S 12:27 0:00 proc08
user 19 6.0 0.0 12168 7088 ? S 12:27 0:00 proc09
user 223 0.0 0.1 852056 39300 ? Ssl 12:27 0:00 proc10
user 470 0.0 0.0 7844 6016 pts/0 Ss 12:27 0:00 proc11
user 683 0.0 0.0 7872 3380 pts/0 R+ 12:37 0:00 proc12

and repeating with different data but the same structure: record 1 of 18... record 2 of 18... etc. The dates and times are the same for each "subsection" of the ps command. I want to be able to graph each "proc" to show its CPU and memory usage over time. The processes will be in a random order. I have the timestamp parsed and fields extracted (like the ip), and I want the header of the ps command to supply the field names for the ps data. I'm struggling with this! I tried mvexpand and/or max_match=0 but failed. Thanks for any help.
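
Not a definitive answer, but the multikv command is built for exactly this kind of tabular command output: it treats the header row of each event as field names and emits one result per table row. A minimal sketch, assuming each "record N of M" block is already one event; the index and sourcetype are placeholders, and multikv sanitizes header names, so run | multikv | fieldsummary once to see what it produced for %CPU (pCPU below is a placeholder for that sanitized name):

index=your_index sourcetype=your_ps_sourcetype
| multikv
| timechart span=5m avg(pCPU) by COMMAND

Run the same search with the %MEM field for the memory chart, or chart both and split the panels.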
index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| eventstats count as failed_count by IONS
| where failed_count>=10
| timechart dc(IONS) as IONS span=1d

This search gets me accurate stats for the last 24 hours (11/5/23-11/6/23). However, when I change the time picker to 30 days, it shows a very large number for 11/5, 11/6, and every other day in that 30-day period. I need the timechart to show only the IONS that have disconnected 10 or more times, and plot that number daily in a line chart. I can't seem to get this to work. Thank you!
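
Not tested against this data, but the symptom fits the eventstats: it counts failures per IONS across the entire search window, so over 30 days almost everything crosses the threshold. Binning by day before counting applies the >=10 threshold per day instead; a sketch, keeping the base search and rex unchanged:

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)"
| bin _time span=1d
| stats count as failed_count by _time IONS
| where failed_count>=10
| timechart span=1d dc(IONS) as IONS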
Hello, I have the code below. I'm trying to create a new column that extracts and pivots CareCnts, CoverCnts, NonCoverCnts, etc. (there are more, but I have simplified). These evals are related to their corresponding elapsed-time evals. New column = ResourceCounts. How would one accomplish this?

index=red msg="*COMPLETED Red*"
| spath output=logMessage path=msg
| eval Care=spath(json, "Info.Care.elapsedTime")
| eval CareCnts=spath(json, "Info.Care.Redcount")
| eval Cover=spath(json, "Info.Cover.elapsedTime")
| eval CoverCnts=spath(json, "Info.Cover.Redcount")
| eval NonCover=spath(json, "Info.NonCover.elapsedTime")
| eval NonCoverCnts=spath(json, "Info.NonCover.Redcount")
| eval Category = "Red"
| table _time, Care, Cover, NonCover, Category
| eval SysTime = Category + ":" + _time
| fields - Category
| untable SysTime Resource CurValue
| eval Category = mvindex(split(SysTime, ":"), 0)
| eval _time = mvindex(split(SysTime, ":"), 1)
| fields - SysTime
| table _time, Resource, CurValue, Category

Example output:

_time        Resource   CurValue   Category   *NewColumn
2023-11-06   Care       14.20      Red        10
2023-11-06   Cover      3.4        Red        3
2023-11-06   NonCover   5.5        Red        8
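
Not the only way, but one approach is to untable the count fields alongside the elapsed times and then re-pair the two series on the same SysTime/Resource key with chart. A sketch under the assumption that the eval names match the post (note the "." concatenation, which is safer than "+" for mixed string/number values):

index=red msg="*COMPLETED Red*"
| spath output=logMessage path=msg
| eval Care=spath(json, "Info.Care.elapsedTime"), CareCnts=spath(json, "Info.Care.Redcount")
| eval Cover=spath(json, "Info.Cover.elapsedTime"), CoverCnts=spath(json, "Info.Cover.Redcount")
| eval NonCover=spath(json, "Info.NonCover.elapsedTime"), NonCoverCnts=spath(json, "Info.NonCover.Redcount")
| eval SysTime = "Red" . ":" . _time
| table SysTime Care Cover NonCover CareCnts CoverCnts NonCoverCnts
| untable SysTime Resource value
| eval Metric = if(match(Resource, "Cnts$"), "ResourceCounts", "CurValue")
| eval Resource = replace(Resource, "Cnts$", "")
| eval key = SysTime . "|" . Resource
| chart values(value) over key by Metric
| eval SysTime = mvindex(split(key, "|"), 0), Resource = mvindex(split(key, "|"), 1)
| eval Category = mvindex(split(SysTime, ":"), 0), _time = mvindex(split(SysTime, ":"), 1)
| table _time Resource CurValue ResourceCounts Category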
Hi all, I have two indexes and the two queries below. host and hostname are common to both, and I want to add sourceIp from the second search. How do I join them?

Query 1:
index="index1" (puppet-agent OR puppet) AND *Error* AND "/Stage["
| table host

Query 2:
index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections
| table hostname sourceIp
| dedup hostname
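
Not the only option (stats-based correlation also works), but a straightforward sketch uses join on the shared host value, assuming hostname in metrics.log matches host in index1 exactly:

index="index1" (puppet-agent OR puppet) AND *Error* AND "/Stage["
| stats count by host
| join type=left host
    [ search index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections
      | dedup hostname
      | rename hostname as host
      | table host sourceIp ]
| table host sourceIp

If the two fields differ (short name vs FQDN), normalize them with an eval before the join.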
Please help me correct the search below. It keeps returning "No" for all the devices even though the app is installed.

index="jamf" sourcetype="jssUapiComputer:computerGeneral"
| dedup computer_meta.serial
| rename computerGeneral.lastContactTime AS lastContactTime
| eval timestamp = strptime(lastContactTime, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval sixtyDaysAgo = relative_time(now(), "-60d")
| where timestamp>sixtyDaysAgo
| eval installed=if(computer_meta.serial IN [ search index="jamf" computer_meta.managed="true" sourcetype="jssUapiComputer:app" app.name="VMware CBCloud.app" | fields computer_meta.serial], "Yes", "No")
| table computer_meta.name, installed
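
Not tested against Jamf data, but two things stand out: eval cannot run a subsearch (the [ search ... ] block is treated as literal text, never executed), and field names containing dots must be wrapped in single quotes inside eval. One common workaround is to pull both sourcetypes into a single search and correlate with stats; a sketch reusing the field names from the post:

index="jamf" (sourcetype="jssUapiComputer:computerGeneral"
    OR (sourcetype="jssUapiComputer:app" computer_meta.managed="true" app.name="VMware CBCloud.app"))
| eval serial='computer_meta.serial', name='computer_meta.name'
| eval has_app=if(sourcetype="jssUapiComputer:app", "Yes", null())
| eval timestamp=strptime('computerGeneral.lastContactTime', "%Y-%m-%dT%H:%M:%S.%3QZ")
| stats latest(name) as name max(timestamp) as timestamp values(has_app) as has_app by serial
| where timestamp > relative_time(now(), "-60d")
| eval installed=coalesce(has_app, "No")
| table name installed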
I want to list which commands in the search language are being used. I think it's possible with the _audit index. What I want to be able to do is count the number of times each command is used in searches. Example:

stats used 2 times
eval used 5 times
rex used 7 times
timechart used 10 times
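
Not a complete accounting (it misses the implicit leading search command and also counts commands inside subsearches), but a rough sketch against the _audit index, extracting each token that follows a pipe:

index=_audit action=search info=granted search=*
| rex max_match=0 field=search "\|\s*(?<command>[A-Za-z]+)"
| mvexpand command
| stats count by command
| sort - count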
Here is what I am attempting to do: I am trying to calculate the distinct count of the 'type' of users that are active. In my log files, all of my users have a user name that follows this pattern: ABCD.aUserName. I am trying to calculate how many distinct active users there are for each 'type', where in the above example ABCD is the type. First, I'm looking for high-level approach ideas; I want to dig into it myself and see what I can get to work, but I just am not able to wrap my noodle around how to even approach it. I can extract the 'type' and the 'username', but if I have two extracted fields, how do I correlate the two to perform a count of dc(usernames) by type?
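
Not knowing the actual field name, assume the full value lives in a field called user (a hypothetical name). Once both pieces are extracted from the same event they are already correlated, so a by clause is all the pairing you need; a sketch:

index=your_index
| rex field=user "^(?<type>[^.]+)\.(?<username>.+)$"
| stats dc(username) as distinct_users by type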
I have sample data something like the below:

{
  "Year": {
    "Top30RequesterInOneYear": { "Bob": 22, "Marry": 12 },
    "TotalRequestCountInOneYear": { "9": "K", "10": "C" },
    "Top10ActionInOneYear": { "31": "update table", "33": "Display log" }
  },
  "Month": {
    "Top30RequsterInOneMonth": { "Foo": 3, "Bob": 6 },
    "TotalRequestCountInOneMonth": { "1": "K", "5": "C" },
    "Top10ActionInOneMonth": { "10": "Display log", "11": "update table" }
  },
  "Week": {
    "Top30RequesterInOneWeek": { "Bob": 6 },
    "TotalRequestCountInOneWeek": { "15": "C" },
    "Top10ActionInOneWeek": { "3": "update table", "7": "display reboot" }
  }
}

The expected output is as below. Can someone please help me with this?

Top30RequesterInOneYear
Name  | Count
Bob   | 22
Marry | 12

TotalRequestCountInOneYear
Count | Status
9     | K
10    | C

Top10ActionInOneYear
Count | Action
31    | update table
33    | Display log

Top30RequsterInOneMonth
Name | Count
Foo  | 3
Bob  | 6

TotalRequestCountInOneMonth
Count | Status
1     | K
6     | C

Top10ActionInOneMonth
Count | Action
10    | display log
11    | update table

Top30RequesterInOneWeek
Name | Count
Bob  | 6

TotalRequestCountInOneWeek
Count | Status
15    | C

Top10ActionInOneWeek
Count | Action
3     | update table
7     | display reboot
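
Not a full solution, but because the inner keys (Bob, Marry, 9, 31, ...) are dynamic, one workable pattern is to auto-extract with spath and then flip the columns into rows with transpose. A sketch for one sub-object, assuming the JSON arrives as a single event (repeat per section, adjusting the path and the Name/Count headers; index and sourcetype are placeholders):

index=your_index sourcetype=your_json
| spath
| table Year.Top30RequesterInOneYear.*
| transpose column_name=Name
| rename "row 1" as Count
| eval Name=replace(Name, "^Year\.Top30RequesterInOneYear\.", "")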
Rather than opening a ticket with Cisco, I was hoping the community could point me in the correct direction. I'm not particularly skilled, but I have tried some of the options from similar problems with no luck.

Splunk version: 8.2.3
OS: RHEL 8
Plugin: Cisco Nexus 9k Add-on for Splunk Enterprise, from Splunkbase

I can get the plugin to connect using http as the connection method; I'm trying to get the https method to work. I can curl to the switch in question using the cert I generated and then imported to the switch:

openssl req -x509 -newkey rsa:4096 -keyout hostkey.pem -out hostcert.pem -sha256 -days 30 -nodes -subj "/C=US.../CN=host"
curl --verbose --cacert hostcert.pem https://host

So at this point I'm confident that the cert and key are installed correctly on the switch and working as expected. The error I'm receiving (truncated):

Caused by SSLError(SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate

My guess is that I need to install the certificate somewhere within the /opt/splunk/etc/auth directory, but I'm not sure. I saw some posts that said to add it to this file or copy it into that directory, but I can't find the one that works. Any insight is appreciated. Thanks.
Hi Team, we are using DB Connect 3.14.1 and Splunk Enterprise 9.1.1. We have installed the DB Connect app and the drivers, but when we configure the database we get the error "database connection is invalid: Login failed for the user". Using the same username and password we are able to log in to the database directly. Can anyone please suggest an answer for this?
Hello, I have a dashboard with multiple panels. Each panel returns a table in the dashboard. I would like to view the complete dashboard as one big table. So far I am able to remove the column names from the second panel onwards, which gives the view of one set of column names on top with the column values below. The issue I am now facing is that the column values are not aligned properly and the data is not readable. I have tried alignment and also cell resizing, but no luck. Any help will be much appreciated. Above is a screenshot; we have 3 panels. Panel 1: column names are displayed. Panels 2 and 3: column names are hidden. We need to align the column values properly in an orderly manner. Thanks.
I uploaded a CSV file with two columns containing an ID number and a malicious domain, but when I go to the threat intelligence audit I see this error:

2023-11-06 13:15:52,655+0000 WARNING pid=3558172 tid=MainThread file=add_threat_workload.py:_sinkhole_file:151 | status="Sinkholing of local files is not allowed" stanza="8

and

2023-11-06 13:16:22,699+0000 ERROR pid=3558172 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: Splunkd daemon is not responding: ('Error connecting to /servicesNS/nobody/SA-ThreatIntelligence/storage/collections/data/threat_intel_meta2/batch_save: The read operation timed out',)
Traceback (most recent call last):
  File "/Splunk-db/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 567, in simpleRequest
    serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
  File "/Splunk-db/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1968, in request
    cachekey,
  File "/Splunk-db/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1626, in _request
    conn, request_uri, method, body, headers
  File "/Splunk-db/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1564, in _conn_request
    response = conn.getresponse()
  File "/Splunk-db/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/Splunk-db/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/Splunk-db/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/Splunk-db/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/Splunk-db/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
    return self.read(nbytes, buffer)
  File "/Splunk-db/splunk/lib/python3.7/ssl.py", line 937, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Splunk-db/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/Splunk-db/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 388, in do_run
    self.run(stanza)
  File "/Splunk-db/splunk/etc/apps/SA-ThreatIntelligence/bin/threatlist.py", line 709, in run
    logger=self.logger
  File "/Splunk-db/splunk/etc/apps/SA-ThreatIntelligence/bin/threat_utils/utils.py", line 181, in set_threat_intel_meta
    options
  File "/Splunk-db/splunk/etc/apps/SA-Utils/lib/SolnCommon/kvstore.py", line 186, in batch_create
    uri, sessionKey=session_key, jsonargs=json.dumps(records))
  File "/Splunk-db/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 579, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /servicesNS/nobody/SA-ThreatIntelligence/storage/collections/data/threat_intel_meta2/batch_save: The read operation timed out',)
Sorry, I am unsure how to describe what I am looking for using Splunk terminology, and I am sure that is why I am having trouble finding the answer. What I am looking for:

User  | Status | count
----------------------
Mike  | True   | 2
      | False  | 1
Logan | True   | 4
      | False  | 2

So far my search looks like this:

index=logs EventType="logon"
| stats values(Status) as Status count by User

It is almost there, but the count column combines the counts for True and False and only gives a single number.
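
Counting by both fields should do it: one row per User/Status combination, which matches the desired table apart from the visual grouping of repeated User values. A minimal sketch:

index=logs EventType="logon"
| stats count by User Status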
Hello! I have run a search which results in a table. In this table, I would like to check whether a combination of values between two fields exists and, if so, return "Yes". I have done this in Power BI using the following command, but I am unsure how to do it in SPL:

VAR _SEL = SELECTCOLUMNS('table1', "code1", [code1])
RETURN IF('table1'[code2] IN _SEL, "Yes", "No")

An example initial table is below:

id, code1, code2
1, ab, cd
2, cd, de
3, ab, hi
4, cd, ab
5, jk, cd
6, hi, jk
7, jk, hi

The result I am looking for is that it will find that the combinations ab+cd and hi+jk exist in both directions (code1, code2 and code2, code1):

id, code1, code2, result
1, ab, cd, yes
2, cd, de, no
3, ab, hi, no
4, cd, ab, yes
5, jk, cd, no
6, hi, jk, yes
7, jk, hi, yes

Thank you for your help!
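
Not the only approach, but one subsearch-free sketch: build an order-insensitive pair key, then count the distinct ordered directions seen for that key with eventstats; a pair that appears in both directions shows two:

... your search ...
| eval ordered = code1 . "->" . code2
| eval pair = if(code1 < code2, code1 . "|" . code2, code2 . "|" . code1)
| eventstats dc(ordered) as directions by pair
| eval result = if(directions = 2, "yes", "no")
| fields - ordered pair directions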
Hello, we have a Splunk instance where we have configured security-related logs. There are hundreds of indexes on the instance, and we are now planning to disable indexes that are no longer active. These security logs are now either going to Azure or no longer needed, so the stakeholders stopped them. I am looking for a query that can give me the list of indexes along with the most recent event timestamp in each index. With those details, the plan is to find the indexes whose latest event is older than one month and consider them migrated or no longer needed.
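
A common sketch for this uses tstats, which reads index metadata rather than raw events. Run it over All Time (or at least a window longer than your cut-off), and note it covers event indexes only (metrics indexes need mstats/mcatalog):

| tstats max(_time) as latest_event where index=* by index
| where latest_event < relative_time(now(), "-30d")
| eval latest_event = strftime(latest_event, "%Y-%m-%d %H:%M:%S")
| sort latest_event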
Hello, I'm facing an issue when trying to create a user or access the saved-searches list. For example, when I use the Splunk Web interface to create a user, the page remains blank and doesn't display as expected, as shown in the screenshot. Additionally, I attempted to create a user through the CLI using the "splunk add" command, but I received no response, as indicated in the screenshot. Have you encountered this problem before? How can I debug it? I'd like to mention that even when I attempt to view saved searches, the page remains blank and doesn't display them. Thank you.
What is wrong with the query below? It does not return any value in the timestamp field. The attached image shows a sample result.

index="jamf" sourcetype="jssUapiComputer:computerGeneral"
| dedup computer_meta.serial
| eval timestamp = strptime(computerGeneral.lastEnrolledDate, "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval sixtyDaysAgo = relative_time(now(), "-60d")
| table computer_meta.name, computerGeneral.lastEnrolledDate, timestamp, sixtyDaysAgo
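
Not certain without seeing the raw values, but the likely culprit is the dotted field name inside eval: without single quotes, computerGeneral.lastEnrolledDate is not treated as a field reference, so strptime receives null. A sketch of the corrected eval (also double-check that the format string matches the stored value exactly, milliseconds included):

index="jamf" sourcetype="jssUapiComputer:computerGeneral"
| dedup computer_meta.serial
| eval timestamp = strptime('computerGeneral.lastEnrolledDate', "%Y-%m-%dT%H:%M:%S.%3QZ")
| eval sixtyDaysAgo = relative_time(now(), "-60d")
| table computer_meta.name, computerGeneral.lastEnrolledDate, timestamp, sixtyDaysAgo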