All Topics

I've been debugging my inner join query for hours, and that's why I'm here with my first question for this community. We have a CSV lookup table with fields "Host_Name", "IP", and others, based on our known hosts that should be reporting. Note: in our Splunk logs, the Splunk "host" field matches the lookup table's "Host_Name" field for some hosts and the "IP" field for others. For this reason, when we add a new host, we add 2 rows to the lookup and place the host name and the IP in both fields of the lookup. (Long story.) Our lookup ("System_Hosts.csv") looks like this:

Host_Name    IP
Foo          Bar
ServerA      123.45.6.7
xyz          abc
123.45.6.7   ServerA
def          ghi
ServerB      ...and so on

Queries that don't work (this is a very oversimplified stub of the query, but I'm debugging and brought it down to the smallest code that doesn't function):

index=myindex
| join type=inner host
    [| inputlookup System_Hosts.csv
     | fields Host_Name, IP]
| table host

(Removing one of the fields from the lookup, just in case I don't understand inner join and the Splunk host has to match both the "Host_Name" and "IP" lookup fields to return results):

index=myindex
| join type=inner host
    [| inputlookup System_Hosts.csv
     | fields Host_Name]

(Removing the optional "type=inner" parameter also doesn't work as expected; inner is the default type.)

Queries that DO work:

(To verify logs and hosts exist, and to visually match the hosts to the lookup table:)

index=myindex
| table host

(To verify the lookup is accessible and the fields and syntax are accurate:)

index=myindex
| inputlookup System_Hosts.csv
| fields Host_Name, IP
| table Host_Name, IP

(To make me crazy? Outer join works, but this just returns all hosts from every log:)

index=myindex
| join type=outer host
    [| inputlookup System_Hosts.csv
     | fields Host_Name, IP]
| table host

So these have been verified:

- spelling of the lookup
- spelling of the lookup fields
- permission to access the lookup
- syntax of the entire query without the "type=inner" optional argument

From my understanding, when this works, the query will return a table with hosts that match entries in the "Host_Name" OR "IP" fields from the lookup. If I don't understand inner join, please tell me, but this is secondary to making inner join work at all, because as you can see above, I try to match only the "Host_Name" field with no success. I'm pulling my hair out! Please help!
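A minimal sketch of a variant that should return rows, assuming the root cause is that join can only match on a field that exists under the same name on both sides: the subsearches above produce Host_Name and IP but no field called host, so an inner join on host finds no common field to match.

index=myindex
| join type=inner host
    [| inputlookup System_Hosts.csv
     | fields Host_Name
     | rename Host_Name as host]
| table host

Matching either column (Host_Name OR IP) would then take a second rename/join pass or a lookup-based approach rather than a single join.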
I'm currently running Universal Forwarders with versions 9.0.0 and 9.0.1. These UFs were flagged for vulnerabilities associated with SVD-2023-0809. The fix would be to upgrade to 9.0.6. I'm running Splunk Enterprise 9.0.5. I need to know whether UF 9.0.6 will be compatible with Splunk Enterprise 9.0.5.
Hi, I have developed a dashboard for firewall metrics, and I need to make my dashboard CIM compliant. Do I need to rewrite my searches to run against the firewall-related datamodel with tstats (| tstats ... where datamodel=...), or do I just need to create tags and eventtypes following that datamodel? I don't really understand this, so can someone describe clearly all the steps for making my app CIM compliant? Thanks
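As an illustration of the tstats style mentioned above, here is a minimal sketch. Note that the CIM has no datamodel literally named "firewall"; firewall traffic is conventionally mapped to Network_Traffic, so the datamodel choice here is an assumption about the data.

| tstats summariesonly=true count
    from datamodel=Network_Traffic
    where nodename=All_Traffic
    by All_Traffic.src, All_Traffic.dest, All_Traffic.action
| `drop_dm_object_name("All_Traffic")`
| sort - count

Either way, the datamodel only picks events up once the right tags (and the eventtypes that apply them) are in place, so the two halves of the question go together.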
I have Splunk 8.2.9 and I want to upgrade to version 9. Can I use the same license after the upgrade?
Hello, if you have these errors while loading Incident Review and you are behind an haproxy server, there is probably a misconfiguration. Possible web errors:

- Error loading some filters
- Updating...
- Waiting for data...
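If the proxy is indeed the culprit, one common class of misconfiguration is timeouts that are too short for ES's long-running requests. A minimal haproxy sketch, with purely illustrative values, might look like:

defaults
    mode http
    timeout connect 10s
    timeout client  10m
    timeout server  10m
    timeout tunnel  10m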
Subject moved to https://community.splunk.com/t5/All-Apps-and-Add-ons/Solution-Splunk-Enterprise-Security-ES-incident-review-not/m-p/694084/thread-id/80869#M80870
Hello, I am using Splunk Enterprise with IT Essentials Work, the Windows Add-on, and the Content Pack for Windows Dashboards and Reports. I made all the necessary configurations for the Content Pack for Windows Dashboards and Reports, but I still cannot see any data in the dashboards or the reports. In the eventtypes.conf file in the DA-ITSI-CP-windows-dashboards/local folder I made the following changes:

[windows_index_windows]
definition= index=windows OR index=main

[perfmon_index_windows]
definition= index=perfmon OR index=itsi_im_metrics

[wineventlog_index_windows]
definition= index=wineventlog OR index=main

[msad_index_windows]
search= index=msad OR index=main

I think the problem starts from the fact that the eventtypes are not recognized in searches. For example, the search (eventtype=msad-successful-user-logons OR eventtype=msad-failed-user-logons) returns nothing. In eventtypes.conf the relevant stanza is:

[msad-successful-user-logons]
search = eventtype=wineventlog_index_windows eventtype=wineventlog_security EventCode=4624 user!="*$"

If I run the search index=main EventCode=4624 user!="*$" I get results. Can someone help me solve the problem? Thanks
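One thing worth checking, offered as a guess at the cause: eventtypes.conf stanzas take search =, not definition = (that attribute belongs to macros.conf), so the first three local overrides above may simply never take effect. A minimal sketch of the same overrides with the expected attribute:

[windows_index_windows]
search = index=windows OR index=main

[perfmon_index_windows]
search = index=perfmon OR index=itsi_im_metrics

[wineventlog_index_windows]
search = index=wineventlog OR index=main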
In the last couple of days, I have seen a few license alerts: "This pool has exceeded its configured poolsize=5GB bytes. A CLE warning has been recorded for all members." Then I looked at the License Usage report by host and I see a couple of issues:

1. My indexer itself is using up most of the license.
2. My indexer is listed twice: once in all capitals (SPLUNK-SERVER1) and a second time as the regular FQDN (splunk-server1.mydomain).

For the 1st issue, I checked further and saw that /var/log/audit/audit.log is the culprit. What can I do to limit it? For the 2nd issue, I guess I have spelled the server name differently somewhere. Where can I check other than /opt/splunk/etc/system/local/server.conf? Thanks for your help.
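For the first issue, one option, sketched under the assumption that audit.log is picked up by a broad [monitor:///var/log] stanza in inputs.conf on that server, is to blacklist the file (or the noisiest part of it) at the input:

[monitor:///var/log]
# Exclude the Linux audit log from ingestion; adjust the regex
# if the file is monitored by a different stanza.
blacklist = audit/audit\.log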
I am looking for a solution to count rows containing certain keywords in column "X", with the remaining data added to "Total". For example, any entry containing SI or SB should be added to the count field "Log", and the other entries, excluding empty cells, should be added to the count field "Total".

SNO  X
1    400
2    SI-SCRIPT-ERROR
3    (SI-BPR-01)
4    SB-Timeout
5    SB-OrderFound
6    (SB-BPR-02)--(SB-EXL-001)
7    201
8    SI-RAS-200
9    <empty>
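A minimal SPL sketch, assuming the rows arrive as events carrying a field named X (the index name is a placeholder):

index=my_index
| eval category=case(match(X, "SI|SB"), "Log",
                     isnotnull(X) AND X!="", "Total")
| stats count(eval(category="Log")) as Log,
        count(eval(category="Total")) as Total

The match() pattern can be tightened if SI/SB should only count when it appears at the start of the value.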
Hello, I have to create a new correlation search looking for failed authentications to the VPN. The rule should trigger if there are more than 5 login failures for a source IP and if there are 20 distinct source IPs with more than 80% login failures. At the moment I have written this query:

| tstats `summary_fillnull` values(Authentication.index) as index, values(sourcetype) as sourcetype, values(host) as host, values(Authentication.signature_id) as signature_id, count
    from datamodel=Authentication
    where NOT [| `authentication_whitelist_generic`] nodename="Authentication.Failed_Authentication"
    by Authentication.src, Authentication.user, Authentication.dest, Authentication.Error_Code, Authentication.Sub_Status
| `drop_dm_object_name("Authentication")`
``` Error_Code and Sub_Status exclusions ```
| `windows_errorcode_substatus_exclusion`

Does anyone have an idea on how to proceed? Thanks in advance.
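A rough sketch of how the two thresholds could be layered on top of that output, assuming each row carries a failure count per src. Note the 80% failure ratio additionally requires total attempt counts (e.g. querying nodename="Authentication" instead of only Failed_Authentication and classifying rows), which this sketch does not cover:

| stats sum(count) as failures by src
| where failures > 5
| eventstats dc(src) as failing_srcs
| where failing_srcs >= 20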
Hello, while parsing the logs I'm trying to extract fields, but at some point I receive the following message: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." Even when I try to highlight only the fields that it fails to extract, I get the same message. Could this issue be related to the configuration file limits.conf?
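If the interactive field extractor keeps failing, one workaround is to write the regular expression by hand as an inline extraction in props.conf, which sidesteps the extractor entirely. Everything below (the sourcetype, the field names, the pattern) is a made-up illustration:

[my_sourcetype]
EXTRACT-status_and_time = status=(?<status_code>\d{3})\s+took=(?<response_ms>\d+)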
Hello, I just have a question about a test that I have in progress. I have one indexer cluster with two servers. I decided to move the colddb path to a SAN share mounted on both indexers. For one test index, I copied all the colddb buckets into the new space on the SAN, then changed the configuration of my test index and pushed it to both indexers with the master node, and all seems to be OK. My question is: how will Splunk manage the colddb if both indexers are pointing at the same share? There is something I'm not sure about: will the indexers properly handle the case where indexer1 has already saved a bucket in the new colddb path, or will I end up with duplicate buckets? Thanks for your clarifications.
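For reference, a minimal indexes.conf sketch of the kind of per-index change being described (all paths are placeholders). One caution, to the best of my knowledge: classic (non-SmartStore) indexer clustering assumes each peer owns its own cold path, so pointing two peers at the very same directory risks bucket collisions rather than deduplication.

[my_test_index]
homePath   = $SPLUNK_DB/my_test_index/db
# Cold buckets relocated to the SAN mount (placeholder path).
coldPath   = /mnt/san/splunk/my_test_index/colddb
thawedPath = $SPLUNK_DB/my_test_index/thaweddb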
Linux logs are only showing epoch time. How can I convert the epoch time to something human readable upon ingestion, in props/transforms? Is there a way or a conversion for this?
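If the goal is for Splunk to recognize the epoch value as the event timestamp (so _time renders human readable in search), a minimal props.conf sketch would be as follows; the sourcetype name and lookahead are assumptions about the data:

[my_linux_sourcetype]
# %s tells Splunk the timestamp is epoch seconds.
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10

Rewriting the raw event text itself would need an index-time transform instead; timestamp recognition alone leaves _raw untouched.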
I tried to make a bubble chart, and the information about this chart is:

- x axis : test start time
- y axis : test duration time
- bubble size : count depending on the x axis and y axis

And this is my code:

| eval start_time_bucket = case(
    start_time >= 0 AND start_time < 5, "0~5",
    start_time >= 5 AND start_time < 10, "5~10",
    start_time >= 10 AND start_time < 15, "10~15",
    start_time >= 15 AND start_time < 20, "15~20",
    true(), "20~")
| eval duration_bucket = case(
    duration >= 0 AND duration < 0.5, "0~0.5",
    duration >= 0.5 AND duration < 1, "0.5 ~ 1",
    duration >= 1 AND duration < 1.5, "1 ~ 1.5",
    duration >= 1.5 AND duration < 2, "1.5 ~ 2",
    duration >= 2 AND duration < 2.5, "2 ~ 2.5",
    true(), "2.5 ~")
| stats count by start_time_bucket, duration_bucket
| eval bubble_size = count
| table start_time_bucket, duration_bucket, bubble_size
| rename start_time_bucket as "Test Start time" duration_bucket as "duration" bubble_size as "Count"

So when start_time is 12 and duration is 2, that data is counted toward the bubble size at start_time_bucket = "10~15" and duration_bucket = "2~2.5". I have a lot of data on each x and y axis, but the chart only shows the bubble for start_time_bucket = "0~5" and duration_bucket = "0~0.5", as in the picture below. How can I solve this problem? When I show this data in a table, it displays correctly.
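One possibility, offered purely as a guess: if start_time and duration are extracted as strings, some of the case() comparisons may not behave numerically, which can collapse most rows into a single bucket. A minimal sketch that forces numeric comparison before the bucketing evals:

| eval start_time = tonumber(start_time)
| eval duration = tonumber(duration)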
How can I cut some parts of my message prior to index time? I tried to use both SEDCMD and a transform on the raw messages, but I still get the full content each time. Here is my current props configuration:

[ETW_SILK_JSON]
description = silk etw
LINE_BREAKER = ([\r\n]+"event":)
SHOULD_LINEMERGE = false
CHARSET = UTF-8
TRUNCATE = 0
# TRANSFORMS-cleanjson = strip_event_prefix
SEDCMD-strip_event = s/^"event":\{\s*//

And my message sample:

"event":{{"ProviderGuid":"7dd42a49-5329-4832-8dfd-43d979153a88","YaraMatch":[],"ProviderName":"Microsoft-Windows-Kernel-Network","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","Opcode":11,"OpcodeName":"Datareceived.","TimeStamp":"2024-07-22T14:29:27.6882177+03:00","ThreadID":10008,"ProcessID":1224,"ProcessName":"svchost","PointerSize":8,"EventDataLength":28,"XmlEventData":{"FormattedMessage":"TCPv4: 43 bytes received from 1,721,149,632:15,629 to -23,680,832:14,326. ","connid":"0","sport":"15,629","_PID":"820","seqnum":"0","MSec":"339.9806","saddr":"1,721,149,632","size":"43","PID":"1224","dport":"14,326","TID":"10008","ProviderName":"Microsoft-Windows-Kernel-Network","PName":"","EventName":"KERNEL_NETWORK_TASK_TCPIP/Datareceived.","daddr":"-23,680,832"}}}

I want to get rid of the "event" prefix, but none of the options seems to work.
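Two hedged observations. First, SEDCMD and index-time TRANSFORMS only run where parsing happens (the first heavy forwarder or the indexer), so this props stanza has no effect if it is deployed only to a universal forwarder. Second, the capture group of LINE_BREAKER = ([\r\n]+"event":) is discarded between events, so for every event after the first, the remaining raw text starts with { and the SEDCMD anchor ^"event": may never match. If a transform-based rewrite is still wanted, a sketch would be:

# transforms.conf
[strip_event_prefix]
# Keep everything after a leading "event": prefix.
REGEX = ^"event":(.*)$
FORMAT = $1
DEST_KEY = _raw

# props.conf
[ETW_SILK_JSON]
TRANSFORMS-cleanjson = strip_event_prefix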
Hey Splunkers, is there any way to use "Realname" and "Mail" within a ProxySSO setup? We are using ProxySSO for authentication and authorization. I figured out that this configuration in authorization.conf works and the user shows up correctly:

[userToRoleMap_ProxySSO]
myuser = myrole-1;myrole-2::test::mymail@test.com

Unfortunately, I didn't find any way to populate this information from the ProxySSO data like I did for RemoteGroup and RemoteUser. Kind regards
I'd like to know what use cases are applied on Splunk Enterprise.
Hello. Thank you for all your help and support. In a registered lookup table file (CSV), if I want to search for and match the value of a specific field against two columns, how should I set the input fields in the automatic lookup setup screen? For example, I have the following columns in my table file:

PC_Name,MacAddress1,MacAddress2

The MAC address in the Splunk index log resides in either MacAddress1 or MacAddress2 in the table file. Therefore, we want to search both columns and return the PC_Name of the matching record. As a test, I tried setting the following two input fields to be searched automatically from the lookup settings screen of the GUI, but PC_Name did not appear in the search result fields (see attached image; with only one of these input field settings, PC_Name is output):

MACAddr1 = Mac address
MACAddr2 = Mac address

So, as a workaround, I split the lookup settings into two, setting MACAddr1 = MacAddress in one and MACAddr2 = MacAddress in the other as the input fields, and the search results are displayed. However, this is not smart. Note that the lookup is configured from the Splunk Web UI. What is the best way to configure this?
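This behavior is expected: when an automatic lookup is given two input fields, a lookup row has to match both of them at once (AND), so a single definition cannot express "MacAddress1 OR MacAddress2". At search time the OR can be emulated explicitly; a sketch, assuming a lookup definition named pc_lookup and an event field named mac_address:

index=my_index
| lookup pc_lookup MacAddress1 as mac_address OUTPUTNEW PC_Name as pc_name1
| lookup pc_lookup MacAddress2 as mac_address OUTPUTNEW PC_Name as pc_name2
| eval PC_Name = coalesce(pc_name1, pc_name2)

Keeping two automatic lookups, as in the workaround, is a reasonable alternative if the field must be populated without touching the searches.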
I have a dashboard which gives the errors below for another user, but when I open the dashboard I don't see any error on my end and it runs perfectly fine with the proper results:

Error in 'lookup' command: Could not construct lookup 'EventCodes, EventCode, LogName, OUTPUTNEW, desc'. See search.log for more details.
Eventtype 'msad-rep-errors' does not exist or is disabled.

Please help me fix this issue.
Hi, I can see Splunk is flagged as vulnerable because it bundles OpenSSL 1.0.2zj rather than 1.0.2zk. I've applied the latest 9.2.2 to Splunk Enterprise and the Universal Forwarder, and they are still running the older 1.0.2zj version. Any ideas when this will be remediated? OpenSSL bulletin from 26 June: [ Vulnerabilities ] - /news/vulnerabilities-1.0.2.html (openssl.org). Per the Splunk advisories, the latest OpenSSL-related update was in March, for the zj version.