All Topics


Hi, this CSS class doesn't apply any styling. What is wrong, please?

<row depends="$STYLES$">
  <panel>
    <html>
      <style>
        .intro { background-color: yellow; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <html>
      <p class="intro">TUTU.</p>
    </html>
  </panel>
</row>
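One way to sanity-check whether the stylesheet loads at all (a minimal sketch, untested): put the <style> block and the element it targets in the same <html> panel, so the CSS and the markup render together:

<row>
  <panel>
    <html>
      <style>
        .intro { background-color: yellow; }
      </style>
      <p class="intro">TUTU.</p>
    </html>
  </panel>
</row>

If that turns the text yellow, the selector and style are fine, and the hidden-row trick (the row that depends on the undefined $STYLES$ token) is the place to look next.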
Hello, I have checkboxes that will serve as filters. Now I want to color-code the text next to each checkbox, NOT the label on top. I already got the label styling working:

#input_severity_low {
  text-shadow: 1px 1px 2px black, 0 0 25px green, 0 0 5px darkgreen;
  font-variant: small-caps;
}

However, this only affects the label above the checkbox. I want the text of (next to) the checkbox itself to be altered. My web inspector shows me something like:

<label data-test="label" for="clickable-ae3424f7-85da-4201-9152-a98bf237f15d" data-size="medium" class="SwitchStyles__StyledLabel-tie4e6-7 hGDbnW"> 4 - Low (<some number>)</label>

However, the CSS style does not react to that class. Anyone have any ideas?

Kind regards, Mike
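A hedged sketch of an alternative selector: the SwitchStyles__StyledLabel-... class names look auto-generated (styled-components hashes), so they can change between builds and may never match a hand-written rule. Scoping by the input's stable id plus the data-test attribute avoids depending on them:

#input_severity_low label[data-test="label"] {
  text-shadow: 1px 1px 2px black, 0 0 25px green, 0 0 5px darkgreen;
  font-variant: small-caps;
}

This assumes the <label> elements sit inside the #input_severity_low container; the inspector's DOM tree will confirm whether that nesting holds.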
Hi all, I have a token "range" which is in the format 0-2, 2-5, 5-10, 10-100. I am splitting it on "-" and saving the values as "minor" and "major". When I try to use those values in the query, I do not get any results. The query is as follows:

search index="abc" sourcetype="xyz"
| eval range="$time$"
| eval temp=split(range,"-")
| eval minor=mvindex(temp,0)
| eval major=mvindex(temp,1)
| search duration>minor AND duration<=major
| table task duration URL

I am not able to display the table. Can anyone please help me with this?
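A sketch of one likely fix (assuming duration is numeric): the search command compares a field against literal values, so duration>minor tests duration against the string "minor" rather than against the field. where evaluates field-to-field comparisons, and tonumber() guards against the split() results being treated as strings:

index="abc" sourcetype="xyz"
| eval temp=split("$time$","-")
| eval minor=tonumber(mvindex(temp,0)), major=tonumber(mvindex(temp,1))
| where duration>minor AND duration<=major
| table task duration URL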
I inherited an old Splunk environment where all data was indexed into the main index. I have set up a new environment with multiple indexes and some parsing rules on a heavy forwarder (these configs work perfectly with the universal forwarders I have deployed). How would I forward the data from the original main index into the heavy forwarder for redistribution into the new indexes?
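One approach sometimes used for this, as a hedged sketch (index and sourcetype names are placeholders): search the old data and write it into a new index with collect. Note that already-indexed events will not pass through the heavy forwarder's parsing rules again, and that collect writes with sourcetype stash by default; keeping the original sourcetype makes the copy count against license:

index=main sourcetype=your_sourcetype earliest=0
| collect index=new_index sourcetype=your_sourcetype

Re-ingesting the original source files through the heavy forwarder is the other common route, since that lets the new parsing and index routing apply as designed.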
Hi, I am trying to configure PaloAlto logs via Splunk Connect for Syslog. I followed the instructions here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/PaloaltoNetworks/ I configured the syslog output on the PaloAlto according to the instructions, and I can see the syslog connections arriving at the host from the firewall using tcpdump port 514. I added the following lines to splunk_metadata.csv:

pan_config,index,test
pan_correlation,index,test
pan_globalprotect,index,test
pan_hipmatch,index,test
pan_log,index,test
pan_system,index,test
pan_threat,index,test
pan_traffic,index,test
pan_userid,index,test

and restarted sc4s:

systemctl restart sc4s

I checked the index test and it is empty. I enabled debugging by adding this line to the env_file:

SC4S_DEST_GLOBAL_ALTERNATES=d_hec_debug

and it seems the index defined in splunk_metadata.csv is not used; osnix is used instead:

curl -k -u "sc4s HEC debug:$SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN" "https://splunk.XX.XXX.XXu:8088/services/collector/event" -d '{"time":"1643726324.000","sourcetype":"nix:syslog","source":"program:","index":"osnix","host":"atlas-fw-01.XXX.XX.XX","fields":{"sc4s_vendor_product":"nix_syslog","sc4s_syslog_severity":"info","sc4s_syslog_format":"rfc5424_strict","sc4s_syslog_facility":"user","sc4s_proto":"UDP","sc4s_loghost":"xxxxxxxxxx","sc4s_fromhostip":"192.168.10.100","sc4s_destport":"514","sc4s_container":"xxxxxxxx"},"event":"2022-02-01T14:38:44.000+00:00 atlas-fw-01.xxx.xxx.xxx - - - - 1,2022/02/01 15:38:43,011901021137,TRAFFIC,end,2561,2022/02/01 15:38:43,192.168.20.63,157.240.27.54,154.14.118.254,157.240.27.54,Normal traffic,xxx\\yyy,,quic,vsys1,Internal,External,ae1,ae2.6,Splunk,2022/02/01 15:38:43,113676,1,56081,443,49985,443,0x400019,udp,allow,7358,2250,5108,19,2022/02/01 15:36:43,0,any,,7030011678692056750,0x0,192.168.0.0-192.168.255.255,Germany,,7,12,aged-out,0,0,0,0,,atlas-fw-01,from-policy,,,0,,0,,N/A,0,0,0,0,c8250554-4ccd-46e3-8498-e74cfe9cdd10,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2022-02-01T15:38:44.130+01:00,,,infrastructure,networking,browser-based,1,tunnel-other-application,,quic,no,no,0"}'

I have already checked that the HEC token is allowed to write to index test. Could someone tell me what is happening? Thanks.
I have been asked to start monitoring several Windows servers for compute consumption, i.e. CPU and memory, at 15-second sample intervals. A metrics index is the natural place for this, and I'm wondering how much license it will consume per system. I was thinking the following approach might work and I'd be interested in peer review (not that I can claim to be a peer of many on this forum, given my noob level at Splunk!):

- Use a VM with 1 vCPU and a 15s sample interval for CPU and memory, sent to a dedicated metrics index
- Collect for 12 hrs
- View license consumption

I then could say that adding another vCPU to a system would require X amount of license based on the same sample interval. Is this reasonable?
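A sketch for the measurement step (assuming the dedicated index is named win_metrics): per-index license usage can be read from the license manager's usage log:

index=_internal source=*license_usage.log type=Usage idx=win_metrics
| stats sum(b) as bytes
| eval MB=round(bytes/1024/1024,2)

One helpful constant: metric events are licensed at a fixed 150 bytes each, so the cost scales linearly with the number of measurements (counters x sample rate), which makes the per-vCPU extrapolation fairly predictable.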
Hi, I'm trying to exclude events from a time range.

index=_internal
| eval Hour=strftime(_time,"%H")
| eval Minute=strftime(_time,"%M")
| eval DayofWeek=strftime(_time,"%w")
| eval Month=strftime(_time,"%m")
| eval WeekOfYear=strftime(_time,"%U")
| search NOT DayofWeek=3 AND Hour>10 Hour<13

With the above query I'm trying to exclude Wednesday between 10:00 and 13:00, but it excludes the whole day. Does anyone have suggestions? I have one more scenario as well: I need to exclude particular hours on Monday and Wednesday.
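A hedged sketch of one likely fix: without parentheses, NOT binds only to DayofWeek=3, and strftime returns strings, so the condition needs grouping and numeric conversion. Assuming %w makes Wednesday day 3, this keeps everything except Wednesday 10:00 to 12:59:

index=_internal
| eval Hour=tonumber(strftime(_time,"%H")), DayofWeek=tonumber(strftime(_time,"%w"))
| where NOT (DayofWeek=3 AND Hour>=10 AND Hour<13)

The second scenario extends the same pattern, e.g. | where NOT ((DayofWeek=1 OR DayofWeek=3) AND Hour>=10 AND Hour<13) for Monday and Wednesday.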
Hi, I already did some research, but it seems our case is a bit special. We collect inventory and performance data from our vCenters with the Add-On for VMware (Splunk_TA_vmware) in version 4.0.2. The heavy forwarder running this TA is also the DCN. I am not able to restrict the collection of performance data with the given options: the interval can be set to a higher or lower value, but the data gathered by the worker is still the same, since it collects everything since the last input. Because we don't need performance data every 20 seconds, I would prefer one 1-hour average event, or if that is not possible, one event per 30 minutes with the latest values. Is there a way to achieve this? It doesn't matter whether it is by design or a Splunk workaround.

Example raw data:
vm-44 500c714d-861b-2f53-1f7f-16d8e72c4e28 aggregated 20 0.04 0.04 2.79 2.73 389 410
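If the TA's own interval can't reduce the volume, a hedged workaround sketch is a scheduled search that rolls the 20-second samples up into 30-minute summaries and writes them to a summary index (the index, sourcetype, and moid grouping field here are assumptions; adjust to what the TA actually emits):

index=vmware_perf sourcetype=vmware:perf*
| bin _time span=30m
| stats avg(*) as * by _time, moid
| collect index=vmware_perf_summary

Scheduled every 30 minutes, this lets the raw perf data age out quickly while dashboards read the much smaller summary index.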
I am doing a CTF that provides logs to filter and work through. One of the questions asks for the time period between when the brute force attack was carried out and the last request that was sent. To find the first timestamp I used:

``` index=botsv1 imreallynotbatman.com source="stream:http" form_data=*username*passwd* | regex "passwd=batman" | table _time | sort by _time | head 1 ```

and similarly, for the last timestamp:

``` index=botsv1 imreallynotbatman.com source="stream:http" form_data=*username*passwd* | regex "passwd=batman" | table _time | sort by _time | tail 1 ```

Each search query works fine by itself, but when used together they don't, and trying:

``` eval start_time = index=botsv1 imreallynotbatman.com source="stream:http" form_data=*username*passwd* | regex "passwd=batman" | table _time | sort by _time | head 1 ```

throws an error: """ : Comparator '=' has an invalid term on the left hand side: start_time=index. """ How do I choose the first and last datetime from the table without using two queries?
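A sketch of the usual single-search pattern: stats can take both the minimum and maximum of _time in one pass, so no head/tail juggling or eval-of-a-search is needed:

index=botsv1 imreallynotbatman.com source="stream:http" form_data=*username*passwd*
| regex "passwd=batman"
| stats min(_time) as first_seen, max(_time) as last_seen
| eval duration=tostring(last_seen-first_seen,"duration")
| eval first_seen=strftime(first_seen,"%F %T"), last_seen=strftime(last_seen,"%F %T")

tostring(...,"duration") renders the gap as HH:MM:SS, which is typically the answer format such CTF questions expect.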
I have 2 columns: one has the application name, the other has the number of instances. I want to remove duplicate application names, but at the same time the instance count should show the sum of all instances for the same application name. I'm using dedup, but summing the instance counts needs some other logic.

APPNAME   INSTANCECOUNT
sap       2
oracle    4
sap       2
git       2
oracle    4
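A sketch of the usual replacement for dedup here: stats de-duplicates and sums in one step:

... | stats sum(INSTANCECOUNT) as INSTANCECOUNT by APPNAME

For the sample data this yields sap=4, oracle=8, git=2.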
Currently we are able to ingest AWS CloudWatch logs into Splunk. In a similar way, is it possible to ingest AWS X-Ray logs into Splunk?
Hi all! Here's my current time format. How could I adjust it from 2022-01-20 18:21:19,448 to 2022-01-20 18:00?
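A hedged sketch, assuming the value lives in a string field called time_str (if the source is _time itself, skip the strptime and use _time directly): parse it, snap to the hour, and re-format:

| eval t=strptime(time_str,"%Y-%m-%d %H:%M:%S,%3N")
| eval rounded=strftime(relative_time(t,"@h"),"%Y-%m-%d %H:%M")

If %3N is not accepted for the ,448 millisecond suffix in your version, trim it first with replace(time_str,",\d+$","").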
Hello, I have a condition: when the variable new_tag of the previous row equals 1 and the variable test_tag of the current row equals 1, I must subtract the start value of the current row from the start value of the previous row. I want the result of the subtraction to be written into the result column of the previous row. Unfortunately, I could only get this subtraction written into the result column of the current row. Could someone please help me? Thank you very much.
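A hedged sketch (field names taken from the description; assumes events are already in the intended display order): reverse the stream so streamstats can look one row ahead, then reverse back, so the result lands on the earlier row:

...
| reverse
| streamstats current=f window=1 last(start) as next_start, last(test_tag) as next_test_tag
| reverse
| eval result=if(new_tag=1 AND next_test_tag=1, start-next_start, null())

After the double reverse, next_start and next_test_tag on each row hold the values of the row below it in the original order, so the if() condition matches "this row has new_tag=1 and the following row has test_tag=1" and writes the difference onto the earlier row, as required.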
I'm receiving the below error. I am on Splunk Enterprise 8.1.3 and using the SolarWinds add-on version 1.2.0. The error is generated on the heavy forwarder, which is using Python version 3. We can successfully ping the SolarWinds server from this heavy forwarder. Does anybody have an idea as to what the issue might be here?

2022-02-01 11:59:28,107 +0000 log_level=ERROR, pid=5855, tid=Thread-4, file=ta_data_collector.py, func_name=index_data, code_line_no=113 | [stanza_name="OT_sowin_query"] Failed to index data
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 109, in index_data
    self._do_safe_index()
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 129, in _do_safe_index
    self._client = self._create_data_client()
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 99, in _create_data_client
    self._data_loader.get_event_writer())
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/ta_cloud_connect_client.py", line 20, in __init__
    from ..core.pipemgr import PipeManager
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/__init__.py", line 1, in <module>
    from .engine import CloudConnectEngine
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/engine.py", line 6, in <module>
    from .http import HttpClient
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/http.py", line 26, in <module>
    'http_no_tunnel': socks.PROXY_TYPE_HTTP_NO_TUNNEL,
AttributeError: module 'socks' has no attribute 'PROXY_TYPE_HTTP_NO_TUNNEL'
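A hedged diagnostic sketch: PROXY_TYPE_HTTP_NO_TUNNEL exists in the patched socks module many Splunk TAs bundle, but not in stock PySocks, so this error often means a different socks.py is being imported first. Checking which copy Splunk's interpreter picks up may narrow it down:

/opt/splunk/bin/splunk cmd python3 -c "import socks; print(socks.__file__)"

If the printed path points somewhere other than the add-on's aob_py3 directory (for example, another app's bundled copy or a globally installed PySocks), that import-order conflict is a plausible culprit.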
Good morning, I need some advice. We have several sources of information about our company assets. I know that is not ideal, but it is better than not having any. So I wrote a script that collects everything from these asset sources and writes the info to a big KV store (1.5 GB) on the Splunk ES search head. The script does that every 6 hours. Now I want to feed this info into Splunk ES Asset and Identity Management. How do I alias a KV store field name so it is CIM-compliant with the required field names as stated here: https://docs.splunk.com/ ? I thought about field aliases in a props.conf, as for normal data sources, but I'm not sure whether to use the collection name as a source in the stanza:

[source::ipam_assets_collection]
FIELDALIAS-asset_ip = Address AS ip

Is there a better way?
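A hedged alternative to props.conf: the ES Asset and Identity framework consumes lookups via a saved search, so the rename can happen there instead of through field aliases (the lookup definition name and extra columns below are assumptions, and this presumes a lookup definition exists over the KV store collection):

| inputlookup ipam_assets_lookup
| rename Address as ip
| table ip, mac, nt_host, dns, owner, priority, category, bunit

With the output columns named per the documented asset field list, the search can be registered as a new asset source on the Asset and Identity Management configuration page in Enterprise Security.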
Hi, I am using Splunk 8.2.1 and I have configured the Docker daemon to send logs to Splunk via an HTTP Event Collector. I have set up the source type "swarm:docker" with the following props.conf:

[swarm:docker]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = Log Swarm
disabled = false
pulldown_type = true

The logs arrive in Splunk with the right source type, but the fields are not extracted. I don't understand what's wrong. Can you help me?
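A hedged sketch of a likely cause: data sent to HEC's /services/collector/event endpoint bypasses the structured-parsing phase, so INDEXED_EXTRACTIONS may never run on it. Search-time JSON extraction usually works instead:

[swarm:docker]
KV_MODE = json

(deployed to the search head; alternatively, pointing the Docker logging driver at the /services/collector/raw endpoint lets the index-time parsing settings apply.)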
How do I parse XML data like this?

<v8e:Event>
  <v8e:Level>Information</v8e:Level>
  <v8e:Date>2022-01-26T16:20:24</v8e:Date>
  <v8e:ApplicationName>Job</v8e:ApplicationName>
  <v8e:ApplicationPresentation>Фоновое</v8e:ApplicationPresentation>
  <v8e:Event>Finish</v8e:Event>
  <v8e:EventPresentation>Сеанс</v8e:EventPresentation>
  <v8e:User>Jong Wik</v8e:User>
  <v8e:UserName>Корот</v8e:UserName>
  <v8e:Computer>srv-2-srv</v8e:Computer>
  <v8e:Metadata/>
  <v8e:MetadataPresentation/>
  <v8e:Comment/>
  <v8e:Data xsi:nil="true"/>
  <v8e:DataPresentation/>
  <v8e:TransactionStatus>NotApplicable</v8e:TransactionStatus>
  <v8e:TransactionID/>
  <v8e:Connection>0</v8e:Connection>
  <v8e:Session>5146</v8e:Session>
  <v8e:ServerName/>
  <v8e:Port>0</v8e:Port>
  <v8e:SyncPort>0</v8e:SyncPort>
</v8e:Event>
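A hedged sketch using spath, which auto-extracts XML at search time and keeps the namespace prefix in the field names:

... | spath
| table "v8e:Event.v8e:Level", "v8e:Event.v8e:User", "v8e:Event.v8e:Computer"

A single element can also be pulled out explicitly:

... | spath input=_raw output=event_user path=v8e:Event.v8e:User

For a permanent setup, KV_MODE = xml on the sourcetype's props.conf stanza is the usual search-time alternative.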
Hi, I launch a dashboard from another dashboard when I click on the field "Site":

/app/spl_pub_dashboard/bib_reg?Site=$click.value$

Now I need the destination dashboard's dropdown to pick up the field "Site" while still offering the site values from "site.csv". Can anybody help, please?

<input type="dropdown" token="Site" searchWhenChanged="true">
  <label>Site</label>
  <fieldForLabel>Site</fieldForLabel>
  <fieldForValue>Site</fieldForValue>
  <search>
    <query>| inputlookup site.csv</query>
  </search>
</input>
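A hedged sketch: for a URL parameter to populate a dashboard input (rather than only setting a bare token), Simple XML expects the form. prefix on the parameter name:

/app/spl_pub_dashboard/bib_reg?form.Site=$click.value$

The dropdown can stay as posted; with form.Site in the URL, it should preselect the matching choice from site.csv when the destination dashboard loads.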
Hello everyone. I'm looking for some assistance with a problem where I get differing search results from what should be the same search.

Backstory: I'm testing changes to the "ESCU - Malicious PowerShell Process - Execution Policy Bypass – Rule" so that I can filter out known PowerShell events. Using the same search head, user, date and time range, and what should be two identical macros, I get different search results.

The original search uses the macro "malicious_powershell_process___execution_policy_bypass_filter". The original search is:

| tstats `security_content_summariesonly` values(Processes.process_id) as process_id, values(Processes.parent_process_id) as parent_process_id values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=powershell.exe (Processes.process="* -ex*" OR Processes.process="* bypass *") by Processes.process_id, Processes.user, Processes.dest
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `malicious_powershell_process___execution_policy_bypass_filter`

The test search uses the macro "malicious_powershell_process___execution_policy_bypass_filter-test". The test search is:

| tstats `security_content_summariesonly` values(Processes.process_id) as process_id, values(Processes.parent_process_id) as parent_process_id values(Processes.process) as process min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.process_name=powershell.exe (Processes.process="* -ex*" OR Processes.process="* bypass *") by Processes.process_id, Processes.user, Processes.dest
| `drop_dm_object_name(Processes)`
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `malicious_powershell_process___execution_policy_bypass_filter-test`

Both macros contain the same content to exclude Splunk Universal Forwarder PowerShell scripts:

search (process!="C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -executionPolicy RemoteSigned -command \". 'C:\\Program Files\\SplunkUniversalForwarder\\etc\\apps\\Splunk_TA_windows\\bin\\powershell\\nt6-health.ps1'\""
AND process!="C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -executionPolicy RemoteSigned -command \". 'c:\\Program Files\\SplunkUniversalForwarder\\etc\\apps\\Splunk_TA_windows\\bin\\powershell\\nt6-repl-stat.ps1'\""
AND process!="C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -executionPolicy RemoteSigned -command \". 'c:\\Program Files\\SplunkUniversalForwarder\\etc\\apps\\Splunk_TA_windows\\bin\\powershell\\nt6-siteinfo.ps1'\""
AND process!="C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -executionPolicy RemoteSigned -command \". 'C:\\Program Files\\SplunkUniversalForwarder\\etc\\apps\\Splunk_TA_windows\\bin\\powershell\\dns-zoneinfo.ps1'\""
AND process!="C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -executionPolicy RemoteSigned -command \". 'C:\\Program Files\\SplunkUniversalForwarder\\etc\\apps\\Splunk_TA_windows\\bin\\powershell\\dns-health.ps1'\"")

When I run both searches I get different results and I'm unsure why. The macro with -test appended works fine. When I copy its contents into the original macro, that search does not seem to use the new contents. I made these changes last week, and today I get the same results. Any ideas as to what might be causing this?
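A hedged way to check which macro definition actually resolves (shadowing across apps or sharing levels is a common reason edits appear not to take effect): list every definition the search head knows under that name:

| rest /servicesNS/-/-/admin/macros splunk_server=local
| search title="malicious_powershell_process___execution_policy_bypass_filter*"
| table title, eai:acl.app, eai:acl.sharing, definition

If the unsuffixed name comes back more than once from different apps, the search may be expanding the copy that was not edited.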
Hello, we recently installed Splunk. We thought we had a free license; however, we got a notice that we have exceeded the quota, and search has been blocked. We have changed the license group to Free, but search is still blocked. How can we unlock it? Thank you very much and regards!