All Topics



Hi, I don't want to display the weekend in my chart. For example, if I use a time picker range of 7 days, I just want to display Monday to Friday. I filter the events with time_wd like this, but it doesn't really work: as you can see I have no results for Saturday, but I still have results for Sunday!

| search (time_h > 6 AND time_h < 20) AND NOT (time_wd=6 OR time_wd=7)

Could you help please?
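One possible cause of the Sunday leak is the weekday numbering behind time_wd (an assumption, since the post doesn't show how that field is built): strftime-style %w numbers Sunday as 0, while %u numbers it as 7. A minimal Python sketch of the two conventions:

```python
from datetime import datetime

sunday = datetime(2022, 11, 13)    # a Sunday
saturday = datetime(2022, 11, 12)  # a Saturday

# %w convention: Sunday=0 .. Saturday=6
assert sunday.strftime("%w") == "0"
assert saturday.strftime("%w") == "6"

# %u convention: Monday=1 .. Sunday=7
assert sunday.strftime("%u") == "7"
assert saturday.strftime("%u") == "6"

# Under %w, the filter NOT (wd=6 OR wd=7) drops Saturday (6)
# but keeps Sunday (0), matching the symptom described above.
```

If time_wd was derived with %w, filtering NOT (time_wd=0 OR time_wd=6), or comparing day names via strftime(_time, "%a"), may be more robust.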
Hi, I recently created a Splunk Cloud free trial and then wanted to create an HTTP Event Collector (HEC). I went to https://prd-p-aaaaa.splunkcloud.com/en-US/manager/launcher/http-eventcollector and added one (my ID is different). I received a token aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa, then followed the documentation here: https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/UsetheHTTPEventCollector to build the URL for the HEC collector. It says to use <protocol>://http-inputs-<host>.splunkcloud.com:<port>/<endpoint>, so from my understanding it should become https://http-inputs-prd-p-aaaaa.splunkcloud.com:8088/services/collector/raw, since I want to send logs as JSON. However, if I try to curl that URL I get:

curl: (6) Could not resolve host: THE_CONFIGURED_HOST

So what I'm asking is: what is the correct URL to use for the free-trial HEC collector? BR, perl
I am creating a table using a search query. I want to show the details of a column value via a dropdown, or via a tooltip when hovering over it. For example, suppose the table has a column Test with a value PT. When I click on PT it should expand and display "Physical Training", or when I hover over it, it should show the complete name. Is it possible to do that?
[WinEventLog:Security]
disabled = 0
index = win*
blacklist1 = EventCode="4662" Message="Accesses:\t\t+(?!Create\sChild)"

Is this the correct way to filter out events which only have "Create Child" as the field value under Accesses? Please let me know if there is a syntax error or any other solution I can try.
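To sanity-check what that lookahead actually matches, here is a Python sketch; the sample Message strings are invented for illustration, and Splunk's blacklist semantics (drop events whose field matches the regex) are assumed:

```python
import re

# Hypothetical sample Message payloads (invented; not real 4662 events):
create_child = "Accesses:\t\tCreate Child"
other_access = "Accesses:\t\tRead Property"

# The pattern from the question (quotes fixed). Because \t+ can backtrack,
# the negative lookahead gets re-tried one tab earlier, so this matches
# BOTH samples, i.e. it would blacklist the Create Child events too:
naive = re.compile(r"Accesses:\t+(?!Create\sChild)")
assert naive.search(create_child) is not None
assert naive.search(other_access) is not None

# Pinning the tab count removes the backtracking escape hatch
# (this assumes the message always uses exactly two tabs):
fixed = re.compile(r"Accesses:\t{2}(?!Create\sChild)")
assert fixed.search(create_child) is None        # Create Child events kept
assert fixed.search(other_access) is not None    # other accesses dropped
```

Note that a blacklist drops events that match, so as written the lookahead version targets events whose access is not Create Child. If the goal is the opposite, dropping the Create Child events themselves, a pattern that matches them directly (e.g. Accesses:\t{2}Create\sChild) may be closer to the intent.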
I have a dashboard with different panels that I would like to convert to a saved search. This accomplishes two things: better performance, since the search can run every 5 minutes while the panels refresh every minute; and everyone looks at the same data, because it sometimes happens that one person sees red while for another it is already green again. How can I do that?
Hi Team, I am planning to integrate FireEye HX with Splunk. For this I have installed the "FireEye App for Splunk Enterprise v3" app from Splunkbase on the Heavy Forwarder and Search Head. As mentioned in the documentation, I also performed the steps below (the HX appliance logging cannot be set from the GUI as of right now, so the CLI is used):

hostname # logging <remote-IP-address> trap none
hostname # logging <remote-IP-address> trap override class cef priority info
hostname # write mem

In the _internal index I can see the error below, and logs are not showing up in Splunk:

ERROR SearchOperator:kv [17796 TcpChannelThread] - Cannot compile RE \"<malware\sname=\"(?<malware_name>[\w-\.]{1,30})\"\s*(sid=\"(?<malware_sid>\d*)")?\s*(stype=\"(?<malware_stype>[\w-]{1,30})\")?\" for transform 'EXTRACT-malware-info_for_fireeye': Regex: invalid range in character class.

Any assistance with this issue will be much appreciated.
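The "invalid range in character class" error typically points at the hyphen sitting between two class items, as in [\w-\.], where the engine tries to read it as a range. A Python sketch of a corrected class; the sample line is invented, and this only illustrates the character-class fix, not the app's full transform:

```python
import re

# Placing the hyphen last (or first) in the class makes it a literal:
# "[\w-\.]" (rejected as a range)  ->  "[\w.-]"
fixed = re.compile(r'<malware\sname="(?P<malware_name>[\w.-]{1,30})"')

# Hypothetical sample line (made up for illustration):
line = '<malware name="trojan.generic-42">'
m = fixed.search(line)
assert m is not None
assert m.group("malware_name") == "trojan.generic-42"
```

The same fix applies in PCRE: [\w\-.] or [\w.-] keeps the hyphen literal inside the class.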
Hi Team, thanks in advance. I need quick help with a regex query. Input values:

KUL6LJBJ62YD
BLR6LC7BLNJR
HRI6M5G6KKPH
KUL6LJ3N0F6J
HRI6LBJKRHHR
HRI6LB65G6NF

Expected output: the first 3 characters of each value. Current regex: (?<SITE_NAME>[^\W]{3}), but I am not getting the proper output. Expected output of | table SITE_NAME:

KUL
BLR
HRI
KUL
HRI
HRI

Thanks, Jerin V
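For reference, the extraction can be checked outside Splunk. A small Python sketch over the sample values above; anchoring the match at the start of the value is the likely missing piece, since an unanchored pattern can match three word characters anywhere in the string:

```python
import re

values = ["KUL6LJBJ62YD", "BLR6LC7BLNJR", "HRI6M5G6KKPH",
          "KUL6LJ3N0F6J", "HRI6LBJKRHHR", "HRI6LB65G6NF"]

# ^ anchors the capture to the first three characters; [^\W] is
# equivalent to \w, so \w{3} is the simpler spelling.
site_re = re.compile(r"^(?P<SITE_NAME>\w{3})")

sites = [site_re.match(v).group("SITE_NAME") for v in values]
assert sites == ["KUL", "BLR", "HRI", "KUL", "HRI", "HRI"]
```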
I have enabled several correlation searches in ES. Those searches run normally and return results as expected if I run them manually. However, they are not running on schedule, and they never show up when I search "index=_internal sourcetype=scheduler". Also, their statistics on the "Content Management" page suggest that they have never been triggered. Do you have any suggestions on this issue?
Hi, I'm getting an error when trying to send email: command="sendemail", [Errno -2] Name or service not known while sending mail to: user@domain.com. Please suggest how to resolve this.
Dears, we need your support to convert the below search to a tstats search:

(index=os_windows OR index=workstation*) tag=authentication user!=*$ action=success EventCode=4624 Logon_Type=10 OR Logon_Type=2 user=admin OR user=administrator OR user=Paradmin OR user=symadmin | table _time index user Source_Network_Address Workstation_Name action Logon_Type | dedup user Workstation_Name

Best Regards
Hi, I have a duration in seconds and want to convert it to days, hours and minutes. The extra seconds should just be cut off in the output. Ideally there should be no leading zeros ("4 hours", not "04 hours"), and if days, hours or minutes are 0 they should not be displayed. Examples:

14400 -> "4 hours"
14432 -> "4 hours"
604800 -> "7 days"
1800 -> "30 minutes"
108002 -> "1 day 6 hours"
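The conversion being described can be sketched in Python (a minimal sketch, not SPL; in SPL one would typically build something equivalent with eval, floor, and modulo arithmetic):

```python
def fmt_duration(seconds: int) -> str:
    """Render seconds as days/hours/minutes, dropping extra seconds,
    zero-valued units, and leading zeros."""
    minutes = seconds // 60            # cut off the extra seconds
    days, rem = divmod(minutes, 1440)  # 1440 minutes per day
    hours, mins = divmod(rem, 60)
    parts = []
    if days:
        parts.append(f"{days} day" + ("s" if days != 1 else ""))
    if hours:
        parts.append(f"{hours} hour" + ("s" if hours != 1 else ""))
    if mins:
        parts.append(f"{mins} minute" + ("s" if mins != 1 else ""))
    return " ".join(parts) if parts else "0 minutes"

# The examples from the question:
assert fmt_duration(14400) == "4 hours"
assert fmt_duration(14432) == "4 hours"
assert fmt_duration(604800) == "7 days"
assert fmt_duration(1800) == "30 minutes"
assert fmt_duration(108002) == "1 day 6 hours"
```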
Hi, we have been using EUM for our Portal site. It has been helpful for seeing slowness issues from the user side. Some users are complaining about slowness in our ERP Oracle system, but we can't see any slowness from the system side. My question is: can EUM work on the ERP Oracle system? AppDynamics is already monitoring the ERP system app using the Java agent.
Hi, I am working on a playbook which will check for any new artifact added during playbook execution. It must repeatedly check for new artifacts. I am looking to add custom code that will be triggered by the addition of a new artifact. Regards, Sujoy
Event ID monitoring: I need to get all the event ID details into Splunk. I used the stanzas below and am not getting data. Please help.

[WinEventLog://Setup]
checkpointInterval = 5
current_only = 0
disabled = 0
whitelist1 = 1,2,3,4
index = sag_windows_normal
ignoreOlderThan = 7d
sourcetype = WinEventLog:Setup

[WinEventLog://Application]
checkpointInterval = 5
current_only = 0
disabled = 0
whitelist = *
index = sag_windows_normal
ignoreOlderThan = 7d
sourcetype = WinEventLog:Application

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
whitelist1 = *
index = sag_windows_normal
ignoreOlderThan = 7d
sourcetype = WinEventLog:System
Hi, I have SPL which just uses a bunch of lookups and produces the following data:

_time | turnaround_time | diff_time | customer | product_to | product_from
2022-06-30 04:04:43.399 | 2022-06-30 04:12:53.556 | 490.156810 | nike | cat | dog
2022-07-07 05:15:14.209 | 2022-07-07 05:31:22.881 | 968.671302 | adidas | bear | cat

I have another lookup, jira_data.csv, which contains the associated Jira data:

Ticket | customer | Summary | Status | Created | Resolved | Updated
COW-245 | nike | customer complaining | open | 2022-06-30 03:04:43.399 | - | 2022-06-30 03:21:43.399
COW-456 | nike | product change | closed | 2022-06-30 02:04:43.399 | 2022-06-30 07:04:43.399 | 2022-06-30 07:20:43.399

I am attempting to do the following: use turnaround_time to look up jira_data.csv and find all Jiras whose Resolved time is within about 2 hours before or after turnaround_time. In this example I am expecting COW-456 as the output.
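The intended matching logic can be sketched in Python with stand-in rows from the tables above. Note that in this sample COW-456's Resolved time is about 2h52m after turnaround_time, so a strict 2-hour window would exclude it; the window below is therefore a tunable assumption:

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M:%S.%f"

# Simplified stand-ins for the search output and jira_data.csv:
events = [{"turnaround_time": "2022-06-30 04:12:53.556", "customer": "nike"}]
jiras = [
    {"Ticket": "COW-245", "customer": "nike", "Resolved": None},  # unresolved
    {"Ticket": "COW-456", "customer": "nike",
     "Resolved": "2022-06-30 07:04:43.399"},
]

window = timedelta(hours=3)  # widened so the sample COW-456 qualifies

matches = []
for ev in events:
    t = datetime.strptime(ev["turnaround_time"], FMT)
    for j in jiras:
        if j["Resolved"] and j["customer"] == ev["customer"]:
            r = datetime.strptime(j["Resolved"], FMT)
            # keep Jiras resolved within the window before or after
            if abs((r - t).total_seconds()) <= window.total_seconds():
                matches.append(j["Ticket"])

assert matches == ["COW-456"]
```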
Running a Windows 2012 R2 DHCP Server with UF 9.0.1 and Splunk Enterprise 8.0.5. My inputs at the UF look like this:

[default]
index = windowsdhcp
_TCP_ROUTING = prod

[WinEventLog://System]
start_from = oldest
disabled = 0
current_only = 0
whitelist1 = SourceName="DhcpServer"
whitelist2 = SourceName="Dhcp-Server"

[WinEventLog://DHCPAdminEvents]
start_from = oldest
disabled = 0

My issue is that the whitelisted events in the 1st stanza are not being forwarded to the indexer. Reviewing the XML of the events in the Windows Event Viewer, these events are collected and indexed:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" />
    <EventID>20251</EventID>
    <Version>0</Version>
    <Level>4</Level>
    <Task>121</Task>
    <Opcode>106</Opcode>
    <Keywords>0x2000000000000000</Keywords>
    <TimeCreated SystemTime="2022-10-29T12:25:40.655052000Z" />
    <EventRecordID>161</EventRecordID>
    <Correlation />
    <Execution ProcessID="3884" ThreadID="4472" />
    <Channel>DhcpAdminEvents</Channel>
    <Computer>dhcp-srv-a.mydomain.com</Computer>
    <Security UserID="S-1-5-20" />
  </System>
  <EventData>
    <Data Name="Server">dhcp-srv-b.mydomain.com</Data>
    <Data Name="RelationName">dhcp-srv-b.mydomain.com-dhcp-srv-a.mydomain.com</Data>
    <Data Name="OldState">COMMUNICATION_INT</Data>
    <Data Name="NewState">NORMAL</Data>
  </EventData>
</Event>

These events do not get captured (note: the event is in classic format):

Log Name: System
Source: Microsoft-Windows-DHCP-Server
Date: 14/11/2022 23:11:37
Event ID: 1376
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: dhcp-srv-a.mydomain.com
Description: IP address range of scope 10.119.6.0 is 89 percent full with only 6 IP addresses available.

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-DHCP-Server" Guid="{6D64F02C-A125-4DAC-9A01-F0555B41CA84}" EventSourceName="DhcpServer" />
    <EventID Qualifiers="0">1376</EventID>
    <Version>0</Version>
    <Level>3</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2022-11-14T23:11:37.000000000Z" />
    <EventRecordID>87097</EventRecordID>
    <Correlation />
    <Execution ProcessID="0" ThreadID="0" />
    <Channel>System</Channel>
    <Computer>dhcp-srv-a.mydomain.com</Computer>
    <Security />
  </System>
  <EventData>
    <Data>10.119.6.0</Data>
    <Data>89</Data>
    <Data>6</Data>
  </EventData>
</Event>

I can't see why the second event is not being collected via the 1st stanza.
Hello, for the past week I've been working on a way to run some queries for a report about vulnerability findings. I made a lookup table for the vulnerability details and call it from the main query to do the work. However, I'm currently having trouble figuring out the scheduled query that should update the vulnerability details lookup table. Since Rapid7 sometimes doesn't import its vulnerability definitions into Splunk well (e.g. there are 270000 lines but some days only 12000 get imported), I wanted to run some validations before running the outputlookup that updates the table. So far I have devised this:

index=rapid7 sourcetype="rapid7:insightvm:vulnerability_definition"
| dedup id
| lookup soc_vulnerabilities.csv vulnerability_id OUTPUT vulnerability_id title description
| stats count as today
| append [| inputlookup soc_vulnerabilities.csv | stats count as yesterday]
| eval prov=yesterday
| eval conditional=if(today>=yesterday,1,0)
| table conditional, today, yesterday, prov

As you can see, all I'm doing is validating whether the number of lines being imported into Splunk is the same as or greater than the number of lines currently stored in the lookup table. The problem is that the eval with the conditional isn't working, because the two totals end up in separate rows, as if they were unrelated (which they kind of are). The result table is as follows:

conditional | today | yesterday | prov
0 | 238732 | |
0 | | 238732 | 238732

What I want is to compare the today and yesterday values to determine whether the lookup table should be updated. I've looked at the documentation and checked other posts here in the forums, but I haven't found a similar case. I hope it's not impossible; I'd appreciate it if you could help me figure this out, or tell me whether I should approach the problem from another angle.

Additional info: for those who have worked with these logs before, the vulnerability_id field doesn't exist in that sourcetype, so we created it via the CLI in the normalization options. Thanks in advance.
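The symptom in the result table, each total on its own row, is exactly what a per-row comparison sees after an append. A Python sketch of the shape of the fix, collapsing the rows before comparing; the SPL equivalent (something like appendcols instead of append, or a stats that folds the two rows into one) is an assumption to verify against the docs:

```python
# The append produces two separate rows, mirroring the result table above:
rows = [
    {"today": 238732, "yesterday": None},
    {"today": None, "yesterday": 238732},
]

# A per-row conditional fails: one operand is always missing.
for r in rows:
    assert r["today"] is None or r["yesterday"] is None

# Collapsing the rows first makes the comparison possible:
merged = {}
for r in rows:
    for k, v in r.items():
        if v is not None:
            merged[k] = v

conditional = 1 if merged["today"] >= merged["yesterday"] else 0
assert conditional == 1
```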
Hello, the other day our ITOC team received this alert: [Splunk Monitoring] Check Failed: Gift Card Virtual - PROD - KB0012356 - [Step 0][Go To URL] net::ERR_NAME_NOT_RESOLVED. I have checked KB0012356 and it does not exist. Any chance I can get info on how to troubleshoot this? Thank you, Laura.
I am getting conflicting information, so I just wanted to ask: if you need to create a new field alias that applies to two sourcetypes, do you need to create two different field aliases or just one?
I am trying to correlate authentication attempts between index_A (username, role) and index_B (username, authentication_time). I want the users returned from index_A who don't show up in index_B over the last N days (e.g. 14 days). To word it better: unique usernames from index_A which could appear in index_B, but I want to list the ones that don't. My current solution pipes a search between the two indexes on the "username" field, but that lists all the matching items, not the unique items from index_A which are not in index_B.
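Stripped of SPL, what is being asked for is a set difference: usernames present in index_A but absent from index_B. A minimal Python sketch with toy rows (field names taken from the question, values invented):

```python
# Toy rows standing in for the two indexes:
index_a = [
    {"username": "alice", "role": "admin"},
    {"username": "bob", "role": "user"},
    {"username": "carol", "role": "user"},
]
index_b = [
    {"username": "bob", "authentication_time": "2022-11-01T09:00:00"},
]

# Users seen authenticating in index_B within the time range:
seen = {row["username"] for row in index_b}

# Set difference: users in index_A who never appear in index_B.
missing = sorted({row["username"] for row in index_a} - seen)
assert missing == ["alice", "carol"]
```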