All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi,

I just started using the Splunk Add-on for Microsoft Security and found out that in ms_security_utils.py, on lines 137 and 138, request.compat.quote_plus is used. However, that gives an incorrect format when the client_id or client_secret has special characters in it, like + or =; it will replace them with %3 and %5. To get the plugin working I just removed request.compat.quote_plus.

Kind regards,
Arnoud
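For illustration, requests.compat.quote_plus aliases urllib.parse.quote_plus on Python 3, and percent-encoding characters such as + and = is exactly its documented behaviour. A minimal sketch of the effect described above; the secret value is made up:

# Demonstrates why quote_plus mangles credentials that contain + or =.
# requests.compat.quote_plus is urllib.parse.quote_plus on Python 3.
from urllib.parse import quote_plus

client_secret = "abc+def="  # made-up secret containing special characters
print(quote_plus(client_secret))  # prints: abc%2Bdef%3D

If the token endpoint expects the raw value, encoding it this way corrupts the credential, which matches the behaviour described in the post.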
Hello All! Kindly help me find a solution for this. I need to whitelist a list of hosts (the host count is >1220 and may grow further) from all alerts. The field name for host varies per correlation search. I have been trying the options below for some days:

1. Upload the list as a lookup table and whitelist through the lookup in every correlation rule (which will cause retroactive alerts).
2. Suppression rule - since the host field name is different for each rule, I need to write a suppression rule for each correlation rule.
3. Single suppression rule - I am not clear how to get the host values from all correlation searches, map them into a single field, and then search those values.

Currently I am trying to write a query to get the host values from `notable` and compare the values:

`notable`
| fillnull value=0 Hostname,dest_host,nt_host,Computer_Name
| eval whitelist_host=if(Hostname!=NULL, Hostname, if(dest_host!=NULL, dest_host, if(nt_host!=NULL, nt_host, Computer_Name)))
| table whitelist_host search_name Hostname dest_host nt_host Computer_Name
| dedup whitelist_host

`notable`
| eval gifted_host=coalesce(coalesce(Hostname,dest_host),nt_host)
| table gifted_host
| dedup gifted_host

Please let me know of any suggestions or if there is any other option.
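One way to fold the varying host fields into a single field and suppress against one list is sketched below; the lookup name host_whitelist.csv and its host column are assumptions, not existing objects:

`notable`
| eval whitelist_host=coalesce(Hostname, dest_host, nt_host, Computer_Name)
| search NOT
    [| inputlookup host_whitelist.csv
     | rename host AS whitelist_host
     | fields whitelist_host ]

coalesce returns the first non-null field, which avoids the nested if() chain, and the subsearch expands the lookup into a NOT (whitelist_host="..." OR ...) filter over the normalized field.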
Is there a way to modify an HTML page using the Splunk interface? I uploaded an HTML file to Splunk, and if I want to modify it, do I have to do it locally and then re-upload it, or can I modify it directly from the Splunk interface?

Thanks in advance and sorry for my bad English.
I created an HTML page with CSS and JavaScript inside. It uploaded correctly to Splunk (I can see the HTML), but it seems the JavaScript isn't working, even though when I open the HTML page locally it works perfectly. Any idea what would solve the problem?

Thanks in advance and sorry for my bad English.
Regex to get only the last segment, "cd", from ab.aaaa.asd.cd.
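A minimal sketch of one way to do this with rex; the field name input and the makeresults test row are just for illustration:

| makeresults
| eval input="ab.aaaa.asd.cd"
| rex field=input "\.(?<last_segment>[^.]+)$"

The pattern anchors at the end of the string and captures everything after the last dot, so last_segment comes out as cd.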
Hey guys,

we use Heavy Forwarders as gateways to Splunk Cloud, so the servers that are logging do not send their logs directly to the internet. Now we want to use the Splunk OTel Collector to send Kubernetes logs. Is it possible to send the Kubernetes logs to our heavy forwarders first instead of directly into the cloud?

Thanks in advance
m
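If the heavy forwarder exposes an HTTP Event Collector (HEC) input, the collector's splunk_hec exporter can be pointed at it instead of at Splunk Cloud. A minimal sketch, assuming a HEC listener on port 8088 of the forwarder; the hostname and token are placeholders:

exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://my-heavy-forwarder.example.com:8088/services/collector"
    sourcetype: "kube:container"

The exporter then needs to be referenced in the collector's logs pipeline in place of the cloud endpoint.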
Hey Guys. I have an input that is refusing to work. The input that doesn't work is this FortiGate one (screenshot); this one on the same syslog server works just fine (screenshot). I checked the app on the syslog server and both inputs look like the above, so they have been pushed fine from the deployment server. Nothing called fortigate is in Splunk. Recent log files ARE populated and present on the syslog server. If I search for the host from the FortiGate input, the following shows up (screenshot), which to me looks like it should be forwarding logs?
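For reference, a file-monitor input of the kind usually deployed for syslog-written FortiGate logs looks roughly like the sketch below; the path, index, and sourcetype here are assumptions, not the poster's actual configuration:

[monitor:///var/log/fortigate/*.log]
index = network
sourcetype = fortigate
disabled = 0

If a stanza like this is in place but nothing is indexed, common causes are the target index not existing on the indexers or the monitored path not matching where the syslog daemon actually writes.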
Hello,

I have the following 2 events:

1st event:

{
  "dimensionMap": {
    "User type": "Real users",
    "dt.entity.application_method.name": "Application"
  },
  "dimensions": [ "APPLICATION_METHOD" ],
  "timestamps": [ 1650966840000, 1650966900000, 1650966960000, 1650967020000, 1650967080000, 1650967140000, 1650967200000, 1650967260000, 1650967320000, 1650967380000, 1650967440000 ],
  "values": [ 0.47, 0.67, 0.37, 0.45, 0.44, 0.57, 0.48, 0.47, 0.69, 0.70, 0.40 ]
}

2nd event:

{
  "dimensionMap": {
    "dt.entity.application_method.name": "Application"
  },
  "dimensions": [ "APPLICATION_METHOD" ],
  "timestamps": [ 1650966840000, 1650966900000, 1650966960000, 1650967020000, 1650967080000, 1650967140000, 1650967200000, 1650967260000, 1650967320000, 1650967380000, 1650967440000 ],
  "values": [ 18, 27, 23, 19, 17, 21, 24, 30, 13, 10, 5 ]
}

I would like to bind each value of the 1st event to each value of the 2nd event. I tried some join commands using the timestamps as a common value, but it didn't work. In the end, I would like the following table (Result = Value1*Value2):

Timestamp        Value1   Value2   Result
1650966840000    0.47     18       0.47*18
1650966900000    0.67     27       0.67*27
1650966960000    0.37     23       0.37*23
1650967020000    0.45     19       0.45*19
1650967080000    0.44     17       0.44*17
1650967140000    0.57     21       0.57*21
1650967200000    0.48     24       0.48*24
1650967260000    0.47     30       0.47*30
1650967320000    0.69     13       0.69*13
1650967380000    0.70     10       0.70*10
1650967440000    0.40     5        0.40*5

Thank you.
Regards,
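One subsearch-free approach is to pair each event's timestamps and values with mvzip, expand the pairs, and pivot by timestamp. A minimal sketch that would follow the base search returning both events, assuming the JSON arrays are auto-extracted as timestamps{} and values{} and that the "User type" dimension only exists on the first event:

| eval pair=mvzip('timestamps{}', 'values{}')
| mvexpand pair
| eval Timestamp=mvindex(split(pair, ","), 0), value=tonumber(mvindex(split(pair, ","), 1))
| eval series=if(isnotnull('dimensionMap.User type'), "Value1", "Value2")
| chart latest(value) over Timestamp by series
| eval Result=Value1*Value2

mvzip pairs the nth timestamp with the nth value, mvexpand gives one row per pair, and chart lines the two events up side by side on the shared Timestamp.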
Hi Splunkers, today I'm facing a problem related to temporal sequence between a multisearch and a search, but let me introduce the context and explain better. In ES, I have to build a correlation search that must verify 2 events in time order:

1. First, check if a trojan, backdoor or exploit is found on a destination host, from some source.
2. Then, check if from the same source on the same destination a login and/or an account change is performed.

Constraints: use datamodels (if possible) and avoid transaction. Now, I know that I can use:

1. Intrusion Detection for point 1.
2. Authentication and Change for point 2.

Now, the search for point 1 is something like this:

| tstats summariesonly=true fillnull_value="N/D" count from datamodel=Intrusion_Detection where IDS_Attacks.signature IN ("*trojan*","*backdoor*","*exploit*") by IDS_Attacks.dest, IDS_Attacks.src, IDS_Attacks.signature, index, host
| `drop_dm_object_name("IDS_Attacks")`

while, for point 2, since I have 2 different datamodels, I built it with a multisearch:

| multisearch
    [| tstats summariesonly=true prestats=true fillnull_value="N/D" count from datamodel=Authentication where nodename="Authentication.Successful_Authentication" by index, host, Authentication.src, Authentication.dest
     | `drop_dm_object_name("Authentication")`]
    [| tstats summariesonly=true prestats=true fillnull_value="N/D" count from datamodel=Change where nodename="All_Changes.Account_Management" by index, host, All_Changes.src, All_Changes.dest
     | `drop_dm_object_name("All_Changes")`]
| stats count by src, dest, index, host
| stats count values(host) as host, values(index) as index by src, dest

I tested both searches separately and they work well. Now the point is: how do I tell Splunk that search 1 must trigger before search 2, without transaction? I thought about the functions min(_time) and max(_time) and the use of eval to check if the first time occurrence of block 2 is greater than the last time occurrence of block 1, but I'm struggling with the correct use of these functions, because the field with the time occurrence is always empty, so it's clear I'm wrong in my combined code. Consider for example the multisearch of block 2, where I tested the use of min:

| multisearch
    [| tstats prestats=true fillnull_value="N/D" min(_time) as firstSuccess, count from datamodel=Authentication where nodename="Authentication.Successful_Authentication" by index, host, Authentication.src, Authentication.dest
     | `drop_dm_object_name("Authentication")`]
    [| tstats prestats=true summariesonly=true fillnull_value="N/D" min(_time) as firstSuccess, count from datamodel=Change where nodename="All_Changes.Account_Management" by index, host, All_Changes.src, All_Changes.dest
     | `drop_dm_object_name("All_Changes")`]
| stats count min(firstSuccess) as firstSuccess by log_region, log_country, src, dest, index, host

The idea is to find the first occurrence in both legs of the multisearch with min(_time) and then, in the following stats, use min(firstSuccess) to find the smaller one between them; the search shows me the required fields in the output, except firstSuccess, which is empty.
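As for why firstSuccess comes back empty: with prestats=true, tstats emits intermediate results meant for a following stats/chart that repeats the same functions, so a rename such as min(_time) as firstSuccess likely never materializes as a real field. A minimal sketch of one way to enforce the ordering without transaction: use append instead of multisearch (append does not require streaming subsearches), keep min(_time) per leg without prestats, and compare the phase times after a final stats. The Change leg would chain on the same way; datamodel constraints and the extra by-fields are trimmed here for brevity:

| tstats summariesonly=true min(_time) as firstDetection from datamodel=Intrusion_Detection where IDS_Attacks.signature IN ("*trojan*","*backdoor*","*exploit*") by IDS_Attacks.src, IDS_Attacks.dest
| `drop_dm_object_name("IDS_Attacks")`
| append
    [| tstats summariesonly=true min(_time) as firstAuth from datamodel=Authentication where nodename="Authentication.Successful_Authentication" by Authentication.src, Authentication.dest
     | `drop_dm_object_name("Authentication")`]
| stats min(firstDetection) as firstDetection, min(firstAuth) as firstAuth by src, dest
| where isnotnull(firstDetection) AND isnotnull(firstAuth) AND firstAuth > firstDetection

The final where keeps only src/dest pairs where a detection exists and the first authentication happened after it.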
Hi, I have some newbie questions. We need to collect Windows/Linux logon events and send them to another system using a forwarder.

1. For Windows, we understand that the options for collecting event logs are: (i) install a forwarder on each Windows machine, or (ii) collect the logs remotely over WinRM using a heavy forwarder. Is this correct, or are we missing some options? What is the most common way? In case a forwarder is installed on each machine, will each one send the data to the indexer, or is it common to use a central forwarder and send to the indexer from there?
2. Are the options similar on Linux? What is the common way?
3. The other system will need to correlate the events with a list of machines it gets from somewhere else, where a machine might appear as an IP address or as a hostname, and it has no way to perform DNS lookups. Is it possible to configure Splunk to forward both the IP and the hostname/FQDN as part of the event?

Thanks,
Gabriel
Is there a way or command to make the table results look like the expected output below?

current data:

hostname    ip            database_status   internet_status   proxy_status
server101   192.168.10.2  online            online            offline
server102   192.168.10.3  offline           online            offline

expected output:

hostname    ip            status
server101   192.168.10.2  database_status="online" internet_status="online" proxy_status="offline"
server102   192.168.10.3  database_status="offline" internet_status="online" proxy_status="offline"
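A minimal sketch of one way to do this with foreach, assuming every column to merge ends in _status; mvappend ignores null values, so the multivalue status field builds up one entry per matching column:

| foreach *_status
    [ eval status=mvappend(status, "<<FIELD>>=\"" . '<<FIELD>>' . "\"") ]
| table hostname, ip, status

Inside the foreach template, <<FIELD>> is replaced with each matching field name, so each row ends up with values like database_status="online".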
I just upgraded to the WiredTiger KV store. I was told that it will improve performance. How can I verify the upgrade? Does the location path change after upgrading to WiredTiger?
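One way to check the active storage engine is the kvstore-status CLI command; the exact output layout varies by version, but it reports a storageEngine value, which should read wiredTiger after a successful migration:

$SPLUNK_HOME/bin/splunk show kvstore-status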
Hi Splunk experts!! Please tell me how to bring the deepest data up through multiple subsearches. Of course, if there is another way to do it than subsearch, we can use that method as well. I understand that when using multiple subsearches, each subsearch just passes field results to the search above it. But can the data of any field in the first subsearch also be passed to the next subsearch? (Same for the second to third subsearch.) I am thinking that this is difficult with subsearch, because a subsearch just passes fields as an AND filter. I believe it can be done with join or stats. But how should I do it? A sketch of the pattern I mean follows after the query.

index=cmdb sourcetype=crm host="fwd-splunk-fwd01a" LogicalName="new_contract" (Attributes.KeyValuePairOfstringanyType{}.new_item_name="DC_Connection" OR Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name="*DC*")
    [| search index=cmdb sourcetype=crm host="fwd-splunk-fwd01a" LogicalName="new_circuit" FormattedValues.KeyValuePairOfstringstring{}.statecode="active" FormattedValues.KeyValuePairOfstringstring{}.statuscode="active" FormattedValues.KeyValuePairOfstringstring{}.new_circuit_status="contracted"
        [| search index=cmdb sourcetype=crm host="fwd-splunk-fwd01a" LogicalName="new_circuit_authority" FormattedValues.KeyValuePairOfstringstring{}.statecode="active" FormattedValues.KeyValuePairOfstringstring{}.statuscode="active" FormattedValues.KeyValuePairOfstringstring{}.new_trouble_mail_receive_flag="yes" FormattedValues.KeyValuePairOfstringstring{}.new_valid_flag="yes"
            [| search index=cmdb sourcetype=crm host="fwd-splunk-fwd01a" LogicalName="new_contactpoint" FormattedValues.KeyValuePairOfstringstring{}.statecode="active" FormattedValues.KeyValuePairOfstringstring{}.statuscode="active" Attributes.KeyValuePairOfstringanyType{}.new_cp_code="CP30058460"
            | fields Attributes.KeyValuePairOfstringanyType{}.new_contactpointid
            | stats latest(*) AS * by Attributes.KeyValuePairOfstringanyType{}.new_contactpointid
            | rename Attributes.KeyValuePairOfstringanyType{}.new_contactpointid AS Attributes.KeyValuePairOfstringanyType{}.new_contactpoint.Id
            | format ]
        | fields Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name
        | stats latest by Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name
        | rename Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name AS Attributes.KeyValuePairOfstringanyType{}.new_circuit_code
        | format ]
    | stats latest by Attributes.KeyValuePairOfstringanyType{}.new_circuit_code
    | fields Attributes.KeyValuePairOfstringanyType{}.new_circuit_code
    | rename Attributes.KeyValuePairOfstringanyType{}.new_circuit_code AS Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name ]
| fields Attributes.KeyValuePairOfstringanyType{}.new_circuit.Id
| stats latest by Attributes.KeyValuePairOfstringanyType{}.new_circuit.Id
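When every level lives in the same index and sourcetype, a common alternative to deep nesting is to search all the LogicalName values at once, derive one shared key per record, and roll everything up with stats, so fields from every level survive into the final result. This is a rough sketch of the pattern only; which field actually links the levels (the circuit key used here) is an assumption:

index=cmdb sourcetype=crm host="fwd-splunk-fwd01a" LogicalName IN ("new_contract", "new_circuit", "new_circuit_authority", "new_contactpoint")
| eval circuit_key=coalesce('Attributes.KeyValuePairOfstringanyType{}.new_circuit.Name', 'Attributes.KeyValuePairOfstringanyType{}.new_circuit_code')
| stats latest(*) AS * by circuit_key

Unlike a subsearch, which only passes a filter upward, every record keeps all of its own fields in the stats output.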
I would like to search for each value in an extracted field. My initial query is as follows:

index=moneta-pro "IPN Post API execution started for the orderRefNo" AND "printOs"
| rex field=_raw "(?ms)^(?:[^ \\n]* ){9}(?P<orderId>\\d+)" offset_field=_extracted_fields_bounds
| table orderId
| dedup orderId

which returns a table of orderId values (screenshot). Now I'd like to take each value of orderId, use it in a second search, and append the result to the above table; for example, to check the status of the order. An individual query would look like:

index=* "Received response status code as 200 and the message body as" AND orderId=<<each dynamic value from above table>>
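A subsearch can feed the deduped IDs into the second search in one pass. A minimal sketch, assuming the outer events also have an extracted orderId field; if the ID only occurs in the raw text, add | rename orderId AS search inside the subsearch so each value is emitted as a bare search term instead:

index=* "Received response status code as 200 and the message body as"
    [ search index=moneta-pro "IPN Post API execution started for the orderRefNo" AND "printOs"
      | rex field=_raw "(?ms)^(?:[^ \\n]* ){9}(?P<orderId>\\d+)"
      | dedup orderId
      | fields orderId ]
| table orderId, _raw

The subsearch expands into an OR of orderId=... terms before the outer search runs.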
I don't know why I'm finding it so hard, but I want to put the accesses from Windows Event 5145 into a multivalued field and I just can't seem to figure it out. By default, Splunk just assigns the first value. So I've been trying to work with this:

| rex "Accesses:[\s]+(?<AccessList>[^v]*)[\v]+Access Check Results:"

04/25/2022 01:23:16 PM
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=5145
EventType=0
Type=Information
ComputerName=test.act.root
TaskCategory=Detailed File Share
OpCode=Info
RecordNumber=984613134
Keywords=Audit Success
Message=A network share object was checked to see whether client can be granted desired access.

Subject:
    Security ID:    S-1-5-99-99999999-999999999-999999999-99999
    Account Name:   XXXX
    Account Domain: act
    Logon ID:       0x999999

Network Information:
    Object Type:    File
    Source Address: 10.1.1.100
    Source Port:    60000

Share Information:
    Share Name:     \\fileshare\file.xxx
    Share Path:     \??\O:\Shared\fileshare\file.xxx
    Relative Target Name: target\share

Access Request Information:
    Access Mask:    0x100081
    Accesses:       SYNCHRONIZE
                    ReadData (or ListDirectory)
                    ReadAttributes

Access Check Results:
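A minimal sketch of one way to get a true multivalue field: capture the whole Accesses block with a non-greedy multiline match, then split it on line breaks with makemv and trim the leftover indentation (mvmap requires Splunk 8.0 or later):

| rex "(?ms)Accesses:\s+(?<AccessList>.+?)\s+Access Check Results:"
| makemv tokenizer="([^\r\n]+)" AccessList
| eval AccessList=mvmap(AccessList, trim(AccessList))

The (?ms) flags let .+? cross line boundaries, and the tokenizer regex turns each captured line into one value, so AccessList ends up with SYNCHRONIZE, ReadData (or ListDirectory), and ReadAttributes as separate values.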
Hi,

I have a timeline visualization as a panel for a dashboard. When I run the visualization as a standalone practice dashboard in the Search & Reporting app, it works as expected. However, when I run the EXACT same query with the same visualization format, it does not show the top of the timeline as required.

The query used in both dashboards is as follows:

<panel>
  <viz type="event-timeline-viz.event-timeline-viz">
    <search>
      <query>index=fraud_glassbox sourcetype="gb:hit" SESSION_UUID="652a0e70-bfdf-11ec-9d96-005056bf9975"
| rename URL_PATH as label
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| strcat hour_minute ":" SEQUENCE combo_time
| rename combo_time as start
| eval tooltip = label
| table label, start, tooltip</query>
      <earliest>-7d</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="drilldown">none</option>
    <option name="event-timeline-viz.event-timeline-viz.backgroundColor">#ffffff</option>
    <option name="event-timeline-viz.event-timeline-viz.eventColor">#d5ddf6</option>
    <option name="event-timeline-viz.event-timeline-viz.maxZoom">3600000</option>
    <option name="event-timeline-viz.event-timeline-viz.minZoom">60000</option>
    <option name="event-timeline-viz.event-timeline-viz.orientation">top</option>
    <option name="event-timeline-viz.event-timeline-viz.stack">true</option>
    <option name="event-timeline-viz.event-timeline-viz.tokenAllVisible">tok_et_all_visible</option>
    <option name="event-timeline-viz.event-timeline-viz.tokenData">tok_et_data</option>
    <option name="event-timeline-viz.event-timeline-viz.tokenEnd">tok_et_end</option>
    <option name="event-timeline-viz.event-timeline-viz.tokenLabel">tok_et_label</option>
    <option name="event-timeline-viz.event-timeline-viz.tokenStart">tok_et_start</option>
    <option name="event-timeline-viz.event-timeline-viz.tooltipDateFormat">DD-MMM-YYYY</option>
    <option name="event-timeline-viz.event-timeline-viz.tooltipTimeFormat">h:mm:ss A</option>
    <option name="height">346</option>
    <option name="trellis.enabled">0</option>
    <option name="trellis.scales.shared">1</option>
    <option name="trellis.size">medium</option>
  </viz>
</panel>

What may be the reason for this?

Thanks,
Patrick
Hi, I managed to get my regex101 expression working; however, I am not able to get it working in Splunk. I would like to extract only the location IDs that are listed in _raw if they are preceded by the text "Location not found. ID: ".

Test string: Location not found. ID: ABC000123244343
Regex101 copied value: /[ABC0]\w+[a-zA-Z0-9]/gm

However, when I tried the below in Splunk it didn't give me the results I expected:

| from datamodel:"xyzlogs"
| fields _raw
| where like(_raw, "%Location not found.ID: ABC000%")
| rex field=_raw "(?P<Location_id>/[ABC0]\w+[a-zA-Z0-9]/gm)"

Any help would be appreciated. Thank you.
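The /.../gm wrapper is regex101's delimiter-and-flags notation, not part of the pattern; rex takes the bare PCRE, so those extra characters end up being matched literally and nothing is extracted. A minimal sketch that also anchors on the preceding text, assuming the IDs always start with ABC as in the sample:

| from datamodel:"xyzlogs"
| rex field=_raw "Location not found\.?\s*ID:\s*(?<Location_id>ABC\w+)"
| where isnotnull(Location_id)

The \.?\s* tolerates both the "found. ID:" and "found.ID:" spellings seen in the post.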
Hi, I'm trying to make a stacked bar chart visualization where my y-axis is milliseconds, my x-axis is a task ID, and I'm splitting by a stage ID. My query is:

| chart max("duration") over task_id by "stage_id"
| table task_id, stage_1, stage_2, stage_3, *

In my results, tasks where stage 1 occurred are so long that they make all the other bars look really tiny. Is there anything I could add to my query to filter out the task_ids where stage_1 occurred?
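Since chart leaves the stage_1 cell empty for tasks that never reached that stage, filtering on the column being null drops the offending rows. A minimal sketch, assuming the split column really comes out named stage_1 as in the table command above:

| chart max("duration") over task_id by "stage_id"
| where isnull(stage_1)
| table task_id, stage_2, stage_3, *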
Hi All,

I have set up Splunk behind a reverse proxy and all works fine when the port used by the proxy to receive traffic is 443; however, when the host port in docker-compose is changed and a root_endpoint is being used, Splunk returns "404 page not found".

Example 1 - Splunk-Traefik-without-Root-Endpoint
https://gist.github.com/lluked/771a1f7f9bbd8ef2581e8828f3b25f9e

When the proxy (Traefik) host port is mapped to 443, Splunk is accessible at https://localhost:443

ports:
  - "80:80"
  - "443:443"

When the proxy (Traefik) host port is mapped to 8443, Splunk is accessible at https://localhost:8443

ports:
  - "80:80"
  - "8443:443"

Both of these scenarios work as expected.

Example 2 - Splunk-Traefik-with-Root-Endpoint
https://gist.github.com/lluked/438b10a6321ff50feb8d704690a0cafc

When the proxy (Traefik) host port is mapped to 443, Splunk is accessible at https://localhost:443/splunk

ports:
  - "80:80"
  - "443:443"

When the proxy (Traefik) host port is mapped to 8443, Splunk returns error 404 at https://localhost:8443/splunk

ports:
  - "80:80"
  - "8443:443"

When the proxy (Traefik) host port is mapped to 443, but this is on a VM and a port on the host is mapped to 443, Splunk returns error 404 again (for example, using Vagrant and mapping 8443 on the host to 443 on the VM and visiting https://localhost:8443/splunk):

ports:
  - "80:80"
  - "443:443"

config.vm.network "forwarded_port", id: "traefik_websecure", host: 8443, guest: 443

It's like Splunk is detecting that requests are coming from a different port and throwing a 404, but only when root_endpoint is being used, and I cannot find any documentation relating to this.

Please can anyone help?
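For reference, the setting in question lives in web.conf; a minimal sketch of the relevant stanza, with the value matching the /splunk path used above:

[settings]
root_endpoint = /splunk

This makes Splunk Web serve everything under the /splunk prefix, which is the mode in which the port mismatch above triggers the 404.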
Hi, I need to set a new index and new metadata at the same time in transforms.conf, based on the host name:

New index = switchoob
New metadata = tecnologia

Like this:

[force_IndexVMW]
SOURCE_KEY = MetaData:Host
REGEX = ^ob\w+
DEST_KEY = _MetaData:Index
FORMAT = switchoob

[force_tecnologiaVMW]
SOURCE_KEY = MetaData:Host
REGEX = ^ob\w+
DEST_KEY = _meta
FORMAT = NFV_SITE::DC02_MIBER tecnologia::vmw

I have tried to find the "More than one DEST_KEY" article, but the link is wrong.

Thank You
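A transform can only write to one DEST_KEY, so the usual pattern is exactly the pair of stanzas above, chained from props.conf so both run against the same events. A minimal sketch, where the sourcetype name is an assumption:

[vmware:syslog]
TRANSFORMS-route_oob = force_IndexVMW, force_tecnologiaVMW

Both transforms then evaluate the same host regex, one routing the event to the switchoob index and the other appending the indexed fields via _meta.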