Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am in the process of normalizing data so I can apply it to a data model. One of the fields having issues is called user. Some logs have data in the user field, while others have an empty user field but do have data in a src_user field. I tried using the coalesce function, but it does not seem to work:

EVAL-user = coalesce(user, src_user)

Is it because I am trying to reference the user field? Are there any other workarounds to create a user field when it is empty, filling in the data from src_user? Remember, some logs have valid data in the user field and others have none.

Thanks!
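A likely cause, sketched below: coalesce() only falls through when the first argument is NULL, and an extracted-but-empty user is often an empty string rather than NULL. A minimal props.conf sketch, assuming that is the case (the stanza name is a placeholder):

```
# props.conf -- sketch only; [your_sourcetype] is a placeholder.
# coalesce() skips NULL but not "", so test for the empty string too.
[your_sourcetype]
EVAL-user = if(isnull(user) OR user=="", src_user, user)
```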
Hi All, I'm having issues with ingesting my CSV files properly into Splunk and did not come across any existing Q&A that covers my specific issue. An example of a couple of rows of data in my CSV follows, with their respective header fields at the top of the file.

Header row:

Plugin ID, CVE, CVSS v2.0 Base Score, Risk, Host, Protocol, Port, Name, Synopsis, Description, Solution, See Also, Plugin Output, STIG Severity, CVSS v3.0 Base Score, CVSS v2.0 Temporal Score, CVSS v3.0 Temporal Score, Risk Factor, BID, XREF, MSKB, Plugin Publication Date, Plugin Modification Date, Metasploit, Core Impact, CANVAS

First event:

135860     None host2.web.com tcp 445 WMI Not Available WMI queries could not be made against the remote host. WMI (Windows Management Instrumentation) is not available on the remote host over DCOM. WMI queries are used to gather information about the remote host, such as its current state, network interface configuration, etc. Without this information Nessus may not be able to identify installed software or security vunerabilities that exist on the remote host. n/a https://docs.microsoft.com/en-us/windows/win32/wmisdk/wmi-start-page Can't connect to the 'root\CIMV2' WMI namespace.   None       4/21/20 12/21/22

Second event:

166602     None host2.web.com tcp 0 Asset Attribute: Fully Qualified Domain Name (FQDN) Report Fully Qualified Domain Name (FQDN) for the remote host. Report Fully Qualified Domain Name (FQDN) for the remote host. n/a   The FQDN for the remote host has been determined to be:   FQDN       : host2.web.com   Confidence : 100   Resolves   : True   Method     : rDNS Lookup: IP Address Another possible FQDN was also detected:         None       10/27/22 10/27/22

For the second event's Plugin Output field, it keeps reading each new line as a new row. A lot of the rows contain similar data, which is causing there to be far more logged events than there are rows in the CSV file.
How can I ensure these fields get parsed properly, so that each row stays within one event and each cell becomes its own field? I have tried a handful of configurations and am currently working with the following.

props.conf

[csv]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
CHARSET = AUTO
KV_MODE = none
pulldown_type = true

[scan_reports]
REPORT-scan_reports = csv_fields

transforms.conf

[csv_fields]
DELIMS = ","
FIELDS = "Plugin ID", "CVE", CVSS v2.0 Base Score", "Risk", "Host", "Protocol", "Port", "Name", "Synposis", "Description", "Solution", "See Also", "Plugin Output", "STIG Severity", "CVSS v3.0 Base Score", "CVSS v2.0 Temporal Score", "CVSS v3.0 Temporal Score", "Risk Factor", "BID", "XREF", "MSKB", "Plugin Publication Date", "Plugin Modification Date", "Metasploit", "Core Impact", "CANVAS"

Any help will be greatly appreciated!
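For comparison, a minimal sketch of the indexed-extraction route. INDEXED_EXTRACTIONS = csv runs structured parsing at the input layer and can keep quoted multi-line cells inside one event, and it is not normally combined with a search-time REPORT/transforms pair for the same fields; this assumes the stanza name matches the sourcetype assigned in inputs.conf and that the multi-line Plugin Output cells are quoted in the file:

```
# props.conf (on the forwarder) -- a sketch, not a drop-in fix;
# [scan_reports] is assumed to be the sourcetype set in inputs.conf
[scan_reports]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
```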
10.179.130.56 - - [14/Apr/2023:01:59:28.233 +0800] "POST /services/broker/phonehome/connection_10.179.130.56_8089_10.179.130.56_MYETKPWSQL002_918B12BB-35AB-452A-BAEB-592395125496 HTTP/1.1" 200 530 "-" "Splunk/8.2.7 (Windows Server 10 Standard Edition; arch=x64)" - 1ms

10.16.36.90 - - [13/Apr/2023:18:27:12.290 +0000] "POST /services/broker/phonehome/connection_10.16.36.90_8089_usseacwsrv190.us.xyz.com_usseacwsrv190_4D304A0A-05E2-483B-B2B5-7CF8A8928B7A HTTP/1.1" 200 24 "-" "Splunk/8.2.7 (Windows Server 10 Datacenter Edition; arch=x64)" - 2ms

Hi everyone, please help me with the regex to extract the fields highlighted in bold above.
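The bold highlighting did not survive the paste, so here is a hedged sketch assuming the usual fields of interest in this splunkd-access-style log (client IP, timestamp, method, URI, status, bytes, user agent, response time); all field names are placeholders to rename as needed:

```
| rex field=_raw "^(?<clientip>\S+)\s+-\s+-\s+\[(?<timestamp>[^\]]+)\]\s+\"(?<method>\S+)\s+(?<uri>\S+)[^\"]*\"\s+(?<status>\d+)\s+(?<bytes>\d+)\s+\"(?<referer>[^\"]*)\"\s+\"(?<useragent>[^\"]*)\"\s+-\s+(?<response_time>\d+)ms$"
```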
I am doing some analysis on our existing searches. What I would like to do is run a saved search when I get its definition back from a rest search. Right now I have the following, which was working at one point, but I'm not sure why it isn't now:

|rest /servicesns/-/-/saved/searches | where title = "test" | fields search | map search="|makeresult | map search="$search$""

I have used this before and it was working, but for some reason now it is not, and I can't tell why. Right now I am just running it in a search, not in a dashboard.
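A hedged sketch of one common way to write this: note that makeresults is spelled with a trailing s, and nesting one map inside another's double-quoted string tends to break the quoting. This assumes the saved search string begins with a plain searchable expression:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="test"
| fields search
| map maxsearches=1 search="search $search$"
```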
The Splunk docs give this format for the bubble chart:

<stats_command> <y-axis_field> <x-axis_field> <bubble_size_field>

The UI displays this format for the bubble chart:

| stats x_value_aggregation y_value_aggregation size_aggregation

Why are the x and y values flipped? In my bubble chart over 4 hours of data, it works where the x value is first. But if I change the time period to 15 minutes, the x value is now the second data point, the first point is no longer even on the y axis (it is now displayed in the legend), and the category I picked is no longer the legend.
How can I collect data from  “serverless” devices?
Hello,

I am attempting to configure Splunk to allow users to authenticate via CAC card using LDAP. However, when I attempt to log in, I get forwarded to a page that simply says "Unauthorized". This suggested to me that Splunk is successfully reading my card but rejecting my credentials for some reason. Checking splunkd.log shows that whenever I attempt to log in, I get the message "Account John D Johnson does not exist".

Looking in Active Directory Users and Computers, the account Splunk is searching for from the card does seem to not exist; however, I'm able to log in to my computer with it, so it must exist in some capacity. My thought is that Splunk is searching for the account with a field that does not match the field it is looking for in AD. Is there any way to tell Splunk which value from the CAC card it should be trying to match in AD?

I tried changing the value of userNameAttribute in authorize.conf, but it seems to have had no effect. My config files are below.

authentication.conf

[authentication]
authSettings = xx
authType = LDAP

[xx]
SSLEnabled = 1
anonymous_referrals = 1
bindDN = xx
bindDNpassword = xx
charset = utf8
emailAttribute = mail
enableRangeRetrieval = 0
groupBaseDN = OU=IT,OU=Groups,OU=RM,DC=xx,DC=xx,DC=xx
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = xx
nestedGroups = 0
network_timeout = 20
pagelimit = -1
port = 636
realNameAttribute = displayname
sizelimit = 30000
timelimit = 30
userBaseDN = DC=xx,DC=xx,DC=xx
userNameAttribute = userprincipalname
#userBaseDN = DC=xx,DC=xx,DC=xx
#userNameAttribute = samaccountname

[roleMap_xx]
admin = xx SPLUNK Admins
isso normal user = xx SPLUNK isso Normal Users
operations normal user = xx SPLUNK Operations Normal Users
user = xx SPLUNK Admins

web.conf

[settings]
httpport = 8000
enableSplunkWebSSL = 1
requireClientCert = 1
sslRootCAPath = C:\Program Files\Splunk\etc\auth\safezone\combined_pivfirst.pem
enableCertBasedUserAuth = 1
SSOMode = permissive
trustedIP = 127.0.0.1
certBasedUserAuthMethod = commonname
privKeyPath = etc\auth\splunkweb\xx.key
serverCert = etc\auth\splunkweb\xx.pem
loginBackgroundImageOption = custom
loginCustomBackgroundImage = search:logincustombg/Warning_for_Official_Use_Only!.jpg
tools.sessions.timeout = 5
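One thing worth checking, sketched below as an assumption rather than a confirmed fix: userNameAttribute is an authentication.conf setting (under the LDAP strategy stanza), not an authorize.conf one, so a change made in authorize.conf would have no effect. With certBasedUserAuthMethod = commonname, the certificate CN (here "John D Johnson") is looked up via userNameAttribute, so that attribute has to be the one whose AD value actually matches the CN:

```
# authentication.conf -- sketch; [xx] is the LDAP strategy stanza
# from the question. Pick the attribute whose value matches the CN
# read from the card (samaccountname shown as one possibility).
[xx]
userNameAttribute = samaccountname
```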
Hi Team, I have a notable event (Excessive Failed Logins on Multiple Targets) in which I'm expecting to see the "dest" field. I've fleshed out the asset summary and source, and all of the source details are populating. I'm seeing dest in other notable events too; it's just this particular notable event. If I pull up the correlated events, dest shows as a field, and I can validate that the values are accurate too. Any reason why dest wouldn't be showing up in the additional fields?
I have a syslog collector receiving logs from multiple syslog devices and writing them into a directory-structured set of log files. The same host runs as my HF. I want one of those .log files to be read using [monitor] and sent to a specific indexer (10.20.30.40:9998), while the others continue to be read by their respective monitors and sent to one of the indexers chosen by indexerDiscovery (some settings in my outputs below are missing, but that part works and is not the issue). The issue is that I am having difficulty onboarding logs from this one .log file; however, I am getting the internal logs of the HF, which means there is no networking issue. Below are my configs; any correction would be highly appreciated.

inputs.conf

[monitor:///var/log/splunk/Checkpoint/*/*.log]
disabled = 0
Sourcetype = cp_log
index = index1
host_segment = 5
# _TCP_ROUTING = isolationGroup

[monitor:///var/log/splunk/Checkpoint_sys/*/*.log]
disabled = 0
Sourcetype = cp_log
index = index1
host_segment = 5

props.conf

[cp_log]
TRANSFORMS-routing = isolationRouting

transforms.conf

[isolationRouting]
REGEX = .*
DEST_KEY = _TCP_ROUTING
FORMAT = isolationGroup

outputs.conf

[tcpout:my_indexers]
clientCert = /opt/splunk/etc/auth/my_certs/my_serverCert.pem
indexerDiscovery = MyIndexersDiscovery
useACK = true

[tcpout:isolationGroup]
server = 10.20.30.40:9998
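A sketch of an alternative worth trying, with two caveats drawn from the posted configs: inputs.conf setting names are case-sensitive, so Sourcetype should be sourcetype, and the TRANSFORMS-routing in props.conf applies to the whole cp_log sourcetype, which both monitors share, so it would route both inputs to the isolation group. Setting _TCP_ROUTING directly on the one input avoids that (paths and group name taken from the question):

```
# inputs.conf -- sketch; only the first monitor is routed to the
# isolation group, the second follows the default tcpout group
[monitor:///var/log/splunk/Checkpoint/*/*.log]
disabled = 0
sourcetype = cp_log
index = index1
host_segment = 5
_TCP_ROUTING = isolationGroup

[monitor:///var/log/splunk/Checkpoint_sys/*/*.log]
disabled = 0
sourcetype = cp_log
index = index1
host_segment = 5
```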
Hi All, I am facing an issue using the lookup command and need your suggestions, please. I have a lookup file as below, in which the same host appears under different bases:

Base  Host      Category
X     device1   Lin
X     device2   Win
X     device3   Lin
M     device2   Lin
M     device14  Win
M     device15  Win

I need to compare the hosts from Base "M" with the hostnames reporting under a particular index, and get the list of matching hosts.

Query:

index=indexA
| lookup lookupfilename Host as hostname OUTPUTNEW Base,Category
| fields hostname,Base,Category
| stats count by hostname,Base,Category
| where Base="M"

Per my lookup file, I should get the output below (assuming device2 and device14 are present in the Splunk index):

hostname   Base  Category
device2    M     Lin
device14   M     Win

But I am getting two entries for device2, as below (the entry under category "Win" is incorrect):

hostname   Base  Category
device2    M     Win
device2    M     Lin
device14   M     Win

Please help me with the query I have framed. Thanks in advance.
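One explanation, plus a sketch of a workaround: the lookup matches on Host alone, so device2 matches both its Base=X, Category=Win row and its Base=M, Category=Lin row, and the multivalued result mixes the categories. Filtering the lookup to Base="M" before matching avoids this (lookup, index, and field names taken from the question):

```
| inputlookup lookupfilename where Base="M"
| rename Host AS hostname
| join type=inner hostname
    [ search index=indexA | stats count BY hostname ]
| table hostname Base Category count
```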
Is it possible to add fields to a chart tooltip to make it more informative? I want to do this in the XML dashboard itself, without creating any additional JS files.
Good morning,

I have a query that I'd like to refine; I'm new to Splunk. The current query I'm running is used to identify when people outside of the country connect to our VPN:

index=company_logs "Client Type: Cisco AnyConnect VPN Agent"
| iplocation src
| stats dc(src) by Country

This works fine for giving us a tally of how many total connections are initiated out of the country. I'd like to get more granular and have a breakdown by username; the field that contains the username is Cisco_ASA_user. How can I adjust the query to include that data? Thanks!
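A sketch of one way to add the username, simply by grouping on the extra field (field names from the question; distinct_sources is an assumed alias):

```
index=company_logs "Client Type: Cisco AnyConnect VPN Agent"
| iplocation src
| stats dc(src) AS distinct_sources BY Country Cisco_ASA_user
```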
Recently we discovered that the Splunk sendemail command, in combination with the sendcsv option, no longer uses the same column order as the search itself. We suspect this has been broken since we upgraded from 8.x to 9.0.3. We've tried adjusting width_sort_columns, but this hasn't produced the results we're looking for. Has anyone else experienced the same issue and perhaps already found a solution?
Two-part question:

1) If I complete an individually owned app upgrade to our Victoria environment through "Install App From File", which changes a props.conf setting, can I run a debug/refresh to reload the Splunk Cloud configs?

2) Or, if I manage the app change through the ACS CLI with edit_local_apps (per https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Config/RBAC — Upgrade app (Victoria): PATCH apps/victoria/{app}, requiring edit_local_apps AND install_apps), will my change avoid a restart of the environment and reload the configs simply through use of the ACS CLI?
I am using Dashboard Studio, and when I create a table viz the scroll is not working, and neither is the next button. When I try to choose 2, 3, or next, nothing happens. Thank you in advance.
Hi, I need your help to get the difference between two searches. I have a task running once a day on all my servers; if the task succeeds, it generates an event log that is sent to Splunk. I need to know which servers did not generate that event. At this moment the result should be one server, which is offline, but I don't get any results, even though each search on its own returns the list of my servers.

The 1st search is a static lookup table with all my servers:

| inputlookup ctx_arc_hardware.csv
| where HW_State="Active" AND (Group="XenApp APPS" OR Group="XenApp RBT")
| table Hostname
| rename Hostname as ComputerName

The 2nd search is the list of servers that generated the specific event, from the eventvwr index:

index=eventviewer sourcetype=ctxevent EventCode=200 earliest=-8h
| table ComputerName

After googling, I found these two approaches, but I'm not getting the result I want:

| set diff
    [search index=eventviewer sourcetype=ctxevent EventCode=200 earliest=-8h | table ComputerName]
    [search inputlookup ctx_arc_hardware.csv | where HW_State="Active" AND (Group="XenApp APPS" OR Group="XenApp RBT") | table Hostname | rename Hostname as ComputerName]

And I also tried:

| inputlookup ctx_arc_hardware.csv
| where HW_State="Active" AND (Group="XenApp APPS" OR Group="XenApp RBT")
| table Hostname
| rename Hostname as ComputerName
Where NOT [search index=eventviewer sourcetype=ctxevent EventCode=200 earliest=-26m | table ComputerName,]

Can you point me in the right direction?
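A hedged sketch of the NOT-subsearch form, with the syntax issues in the posted attempt fixed: inputlookup needs a leading pipe inside its subsearch, the filter clause should be search NOT rather than Where NOT, and the trailing comma after ComputerName would break the table command. The names are taken from the question:

```
| inputlookup ctx_arc_hardware.csv
| where HW_State="Active" AND (Group="XenApp APPS" OR Group="XenApp RBT")
| rename Hostname AS ComputerName
| search NOT
    [ search index=eventviewer sourcetype=ctxevent EventCode=200 earliest=-8h
      | stats count BY ComputerName
      | fields ComputerName ]
| table ComputerName
```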
I am new to regex expressions and trying to figure them out. I am trying to extract two sections of the following log field:

5002:fromhost=999.99.99.99:fromport=3299:sid=92ac3498-d95d-11ed-af19-92eb6037d638:respcode=OK:resptime=7:node=999999ss03:nodePort=5002:cosId=asasasa

I want the IP address that appears after fromhost, and the cosId value asasasa at the end of the field, but I'm not having much luck.
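A sketch using two rex calls, assuming the field shown is _raw (adjust field= if it is an extracted field, and note the field names fromhost and cosId here are just the capture names chosen for illustration):

```
| rex field=_raw "fromhost=(?<fromhost>[^:]+)"
| rex field=_raw "cosId=(?<cosId>.+)$"
```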
Hello,

I'm trying to set up the SA-eventgen app, following the simple tutorial with sample files, but I got this fatal error: "function \"seqfile\" not defined", along with this intermediate error: "error parsing token newline: {{$templateData}}", and I don't know why.

Here is my eventgen.conf, located in the misp42splunk app:

sudo cat ./etc/apps/misp42splunk/default/eventgen.conf

[film.json]
index = main
count = 100
mode = sample
end = 1
autotimestamp = true
sourcetype = json
source = /opt/splunk/sources/film.json
token.0.token = "FILM_ID":(\d+)
token.0.replacementType = integerid
token.0.replacement = 100
token.1.token = "REGION_ID":(\d+)
token.1.replacementType = seqfile
token.1.replacement = /opt/splunk/etc/apps/sample_conf/samples/count10.txt
I have a multiselect for software version (a version is just yyyy.mm.dd or an alphanumeric string). If the user selects a certain version, the multiselect should hide all of its entries below the selected version, so that the user only sees versions higher than the one selected. Is this possible?

<input type="multiselect" token="VERSION" searchWhenChanged="true">
  <label>Version</label>
  <fieldForLabel>VERSION</fieldForLabel>
  <fieldForValue>VERSION</fieldForValue>
  <choice value="all">ALL</choice>
  <!--<default>all</default>-->
  <!--<initialValue>all</initialValue>-->
  <valuePrefix>'</valuePrefix>
  <valueSuffix>'</valueSuffix>
  <delimiter> AND </delimiter>
  <change>
    <eval token="form.VERSION">if(mvcount('form.VERSION')=3, mvindex('form.VERSION',0, 1),'form.VERSION')</eval>
  </change>
  <search>
    <query>| dbxquery query="select distinct VERSION from table WHERE blah blah blah" connection="blah" maxrows=0</query>
  </search>
</input>

Basically we're trying to get a setup for selecting a range of versions, like 2022.01.01 to 2023.01.02, using two multiselect inputs.
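Simple XML cannot filter an input's choices client-side without custom JS, but since the choices come from a dbxquery populating search, one hedged sketch of the two-input range approach is to make the second input's populating search depend on the first input's token. Here VERSION_FROM is an assumed token set by a first input, and the table and connection names are placeholders from the question:

```xml
<input type="multiselect" token="VERSION_TO" searchWhenChanged="true">
  <label>Version (to)</label>
  <fieldForLabel>VERSION</fieldForLabel>
  <fieldForValue>VERSION</fieldForValue>
  <search>
    <query>| dbxquery query="select distinct VERSION from table where VERSION &gt;= '$VERSION_FROM$'" connection="blah" maxrows=0</query>
  </search>
</input>
```

This re-runs the second input's query whenever the first token changes, so the list only offers versions at or above the chosen lower bound.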
Hi, I'm looking for a search to exclude the IPs present in the lookup table:

ips            comments
142.45.2.3     scanner
123.4.45.22    network
123.66.33.4    alert scanner
123.45.7.9     cisa scanner

I'm trying to exclude the IPs with the word "scanner" in the comments column.

Thanks
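A sketch of one common pattern, using a subsearch over the lookup to build the exclusion; the index, lookup file, and event field names (your_index, your_lookup.csv, src_ip) are placeholders to replace with the real ones:

```
index=your_index NOT
    [ | inputlookup your_lookup.csv
      | search comments="*scanner*"
      | fields ips
      | rename ips AS src_ip ]
```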