All Posts

I'm thinking about an input box that is both a dropdown and a free-text field, with either choice driving a single token. I'm not sure whether this will work and would appreciate guidance. Thanks in advance. Sanjai S
Dear Splunkers, I would like to ask your feedback on the following issue with the ServiceNow add-on. The problem is that I'm not able to display the settings page for the add-on where I need to select the different ServiceNow accounts that were configured successfully. What it should look like is the following (taken from a different project): this is where I can choose my preferred account and configure the details. Can you suggest what could be the reason I cannot see these settings with my admin role account? Thanks in advance. BR
Hi @Symon,
don't use join because it's a very slow search; use stats in this way:

index=fortigate sourcetype IN (fgt_event, fgt_traffic)
| eval src=coalesce(src,assignip)
| stats values(srcport) AS srcport values(dest) AS dest values(destport) AS destport BY user src

If you want only the events present in both sourcetypes, you have to add an additional condition:

index=fortigate sourcetype IN (fgt_event, fgt_traffic)
| eval src=coalesce(src,assignip)
| stats values(srcport) AS srcport values(dest) AS dest values(destport) AS destport dc(sourcetype) AS sourcetype_count BY user src
| where sourcetype_count=2
| fields - sourcetype_count

Ciao.
Giuseppe
I did not get results. I have to calculate the average time to close alerts by severity; in this case I am calculating "medium" severity alerts, so the equation should be the total time spent "closing" medium alerts divided by the total number of medium alerts.
Hello, I have started a Cloud Trial to create a test environment for a connector that I wanted to test for a customer. This connector requires additional ports to be opened to allow data ingestion from Azure Event Hub, which should be configured using the ACS API. I enabled token authentication from the portal and generated a new token. I then configured Postman to use the API and set up a new request to test API access:

https://admin.splunk.com/{{stack}}/adminconfig/v2/status

where {{stack}} represents my instance name defined at the collection level, with the bearer token configured in the Authorization tab. However, when executing the request, it loops for approximately 30 seconds to a minute before returning the following error message:

{
  "code": "500-internal-server-error",
  "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=426a14b3-97e3-968a-a924-f3abc4300795). Please refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}

Despite my efforts, this error has persisted for over 24 hours, and I have no idea what the root cause might be. Could anyone advise on how to address this issue and successfully configure the necessary settings? Any assistance would be greatly appreciated. Thank you.
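For reference, here is a minimal sketch of the same status check made outside Postman, which can help rule out a Postman-side configuration issue. The stack name and token values below are placeholders, not real values:

import requests

stack = "your-stack-name"   # placeholder: your Splunk Cloud stack name
token = "your-acs-token"    # placeholder: the authentication token generated in the portal

# Same ACS status endpoint as in the Postman request, with bearer-token auth
resp = requests.get(
    f"https://admin.splunk.com/{stack}/adminconfig/v2/status",
    headers={"Authorization": f"Bearer {token}"},
    timeout=90,
)
print(resp.status_code)
print(resp.text)

If this call returns the same 500 error, the problem is on the stack or token side rather than in the Postman collection settings.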
Hi all, I would like to visualize a person's schedule as well as show the moments when events took place. The visualization should make apparent whether the events took place within or outside the person's working hours. I'm stumped at how to tackle this. Anyone know which visualization type to use? Perhaps also any pointers on how to prepare the data?

Data example:

EmployeeID    work_from    work_until    event_timestamp
123           08:00        17:00         16:30
123           08:00        17:00         01:00

Below is a quick sketch of what I would like to end up with. The green bars show the working hours, so on Monday this person is working from 14:00 - 24:00 and has an event at 23:00. On Tuesday the person is not working but still has 3 events.
Alright. For those having the same problem and monitoring evolutions in the replies, here is the tip:
1. Remove the add-on for ARM64 and verify from the shell that it is gone (no PSC add-on when you ls -l $SPLUNK_HOME/etc/apps)
2. Install the add-on for x86_64
3. Restart the Splunk instance (UI or CLI)
The problem was simply that a Splunk instance running on an M2 (under Rosetta 2) needs the x86_64 Python libraries. Installing the ARM64 ones doesn't help, BUT installing the x86_64 version without first removing the ARM64 one won't help either. I hope this will help other M2 Splunk fellows. Good luck
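If you want to confirm which architecture the Python runtime reports before and after the swap, a quick check like the sketch below may help; run it with the interpreter Splunk uses (for example via $SPLUNK_HOME/bin/splunk cmd python3, and the script name here is just an example):

import platform

# Prints the machine architecture the interpreter reports,
# e.g. "arm64" for native Apple silicon or "x86_64" when running under Rosetta 2.
print(platform.machine())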
As an MSSP overseeing 10 distinct Splunk customers, I'm on the lookout for a solution that allows me to efficiently monitor license usage and memory consumption across all clients. Since these customers don't communicate with each other, I'm considering setting up a dedicated Splunk instance to collect and consolidate these logs. Any recommendations for apps that can help achieve this seamlessly, or perhaps an alternative approach that you've found effective in similar scenarios? Your insights would be greatly appreciated! Thanks in advance. #SplunkMonitoring #MSSPChallenges #splunkenterprise #monitoringConsole
Hello everyone. I experienced a cyberattack on my computer, and the Avast firewall detected it and alerted me with pop-up messages. I intend to report the incident to the police for investigation, and they require me to have a log file containing details of this attack. The hard drive's Windows boot trail has been corrupted, leaving me only able to access the folders and files as an external drive. I need assistance locating the correct folder and file containing evidence of the cyberattack. I have a screenshot displaying the Avast warning message, which occurred on January 4, 2024. Thank you, Daniel
@glc_slash_it custom functions don't pass out information in the same way an action would via action_results, so unfortunately you can't filter on CF outputs in the same way. Either:
1. Do the filtering in the CF and pass out only what you need (see the sketch after this reply).
2. Use another code block to do the additional processing and pass out a list.
3. Convert your function to an app action to take advantage of the action_results capability; then you can split the results with filters/decisions.
-- I hope this helped; if so, please mark it as a solution for others asking the same question! Happy SOARing! --
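Following up on option 1, here is a minimal sketch of the idea; the function name and output shape are illustrative assumptions, not the SOAR API:

# Hypothetical custom-function body: split the IOC list by indicator type
# so each downstream sub-playbook receives only the items it needs.
def split_iocs(ioc_list):
    ips = [item for item in ioc_list if item.get("type") == "ip"]
    hashes = [item for item in ioc_list if item.get("type") == "hash"]
    return {"ip_list": ips, "hash_list": hashes}

Each returned list can then feed the relevant sub-playbook input directly, with no filter block needed.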
I have the index=fortigate and there are two sourcetypes ("fgt_event" and "fgt_traffic").

index=fortigate sourcetype=fgt_event | stats count by user, assignip

user    assignip
john    192.168.1.1
paul    192.168.1.2

index=fortigate sourcetype=fgt_traffic | stats count by src srcport dest destport

src            srcport    dest        destport
192.168.1.1    1234       10.0.0.1    22
192.168.1.2    4321       10.0.0.2    22

I want to correlate the results like:

user    src (or) assignip    srcport    dest        destport
john    192.168.1.1          1234       10.0.0.1    22
paul    192.168.1.2          4321       10.0.0.2    22

I have learned SPL commands like join, mvappend, coalesce, subsearch, etc., and tried a lot of combinations of them, but it still doesn't work. Please help me. Thanks.
Hello, I'm trying to pass a list of dicts from a custom code block into a filter block, to run either the ip_lookup or hash_lookup sub-playbook (or both) based on the indicator type. For example:

ioc_list = [
    {"ioc": "2.2.2.2", "type": "ip"},
    {"ioc": "1.1.1.1", "type": "ip"},
    {"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}
]

And then in the filter I have:

if get_indicators:custom_function:ioc_list.*.type == ip -> run ip_lookup sub-playbook
if get_indicators:custom_function:ioc_list.*.type == hash -> run hash_lookup sub-playbook

It looks like the filter does half of the job: it can route to the proper sub-playbook(s), but instead of forwarding only the elements that match the conditions, it simply forwards all elements.

Expected filtered-data on the condition_1 route:
[{"ioc": "2.2.2.2", "type": "ip"}, {"ioc": "1.1.1.1", "type": "ip"}]

Expected filtered-data on the condition_2 route:
[{"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}]

Actual output on both condition routes:
[{"ioc": "2.2.2.2", "type": "ip"}, {"ioc": "1.1.1.1", "type": "ip"}, {"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}]

Even though this seems like a specific question, it is also part of a broader misunderstanding of how custom code blocks and filters interact with each other. Hope someone can point me down the correct path. Thanks.
Hi @allidoiswinboom, where did you locate these conf files? They must be located on the first full Splunk instance that the data passes through. In other words, if you're using a Heavy Forwarder to collect these logs, you have to put them on the HFs; if instead you're using a Universal Forwarder, you have to locate them on the Indexers or on an eventual intermediate HF. Ciao. Giuseppe
Hi @Harish2, you have to exclude the days and the hours from the results, something like this:

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval day=tonumber(strftime(_time, "%d")), hour=tonumber(strftime(_time, "%H"))
| where NOT (in(day, 10, 18, 25) OR hour<8 OR hour>11)
| fields - _time day hour

You can also check weekends and holidays, using a lookup containing the holidays, e.g.:

date          type
2024-03-29    weekday
2024-03-30    weekend
2024-03-31    weekend
2024-04-01    holiday
2024-04-02    weekday
2024-04-03    weekday
2024-04-04    weekday
2024-04-05    weekend
2024-04-06    weekend
2024-04-07    weekday

and running something like this:

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval day=tonumber(strftime(_time, "%d")), hour=tonumber(strftime(_time, "%H")), date=strftime(_time, "%Y-%m-%d")
| lookup calendar.csv date OUTPUT type
| where NOT (in(day, 10, 18, 25) OR hour<8 OR hour>11 OR in(type, "weekend", "holiday"))
| fields - _time day hour

Ciao.
Giuseppe
Yeah, but the `edit_modinput_identity_manager` capability is already checked in my role because I'm an administrator, and I still see this error.
Thanks for the tip. I'm afraid I don't quite know what to ingest on a Mac or how to do it right, especially if it should be shipped to another instance. The problematic one is a work computer that is connected to a corporate VPN (but will disconnect from time to time) and runs a bunch of corporate "security stuff" like MS Defender.
@DaisyNguyen Hey, ensure that you have the necessary permissions. You need the "edit_modinput_identity_manager" capability to use the Asset and Identity Management interface. Verify your role assignments and permissions in Splunk.
Manage assets and identities in Splunk Enterprise Security - Splunk Documentation
Configure users and roles - Splunk Documentation
Assuming you make a lookup called silence.csv with the following:

exclude_days    silence_start    silence_end
10,18,25        8                11

you can do something like

| tstats count max(_time) as _time ``` you will need latest _time ``` where index=app-idx host="*abfd*" sourcetype=app-source-logs by host
| eval dom = tonumber(strftime(_time, "%d")), hod = tonumber(strftime(_time, "%H"))
| append [| inputlookup silence.csv | eval exclude_days = split(exclude_days, ",")]
| eventstats values(exclude_days) as exclude_days values(silence_*) as silence_*
| where isnull(mvfind(exclude_days, "^".dom."$")) AND (tonumber(silence_start) > hod OR hod > tonumber(silence_end)) ``` whether AND or OR depends on the exact semantics ```
| fields - _time dom hod

But note that if you have lots of hosts with alerts, this can get slow. This method is flexible enough to implement either of the possible intentions in a similar fashion. If performance becomes a problem, you will need to optimize for the exact intent.
Splunk SOAR (On-premises) installs with a default license, the Community License. The Community License is limited to:
100 licensed actions per day
1 tenant
5 cases in the New or Open states
If the quota of 5 cases is already reached, will I still be assigned new cases? Alternatively, can new cases only be assigned once the previous 5 cases have been resolved?
@bowesmana @yuanliu I think I just figured it out using math (modulo). Please let me know what you think. Thank you for your help.

For Tuesday:      | where (_time % 604800) = 450000
For Wednesday:    | where (_time % 604800) = 536400

And so on. See this table; there is a pattern for each day of the week:

Day          time              _time         Student     MathGrade    mod 604800
Thursday     2/8/24 5:00 AM    1707368400    Student1    10           18000
Friday       2/9/24 5:00 AM    1707454800    Student1    9            104400
Saturday     2/10/24 5:00 AM   1707541200    Student1    8            190800
Sunday       2/11/24 5:00 AM   1707627600    Student1    7            277200
Monday       2/12/24 5:00 AM   1707714000    Student1    6            363600
Tuesday      2/13/24 5:00 AM   1707800400    Student1    5            450000
Wednesday    2/14/24 5:00 AM   1707886800    Student1    6            536400
Thursday     2/15/24 5:00 AM   1707973200    Student1    7            18000
Friday       2/16/24 5:00 AM   1708059600    Student1    8            104400
Saturday     2/17/24 5:00 AM   1708146000    Student1    9            190800
Sunday       2/18/24 5:00 AM   1708232400    Student1    10           277200
Monday       2/19/24 5:00 AM   1708318800    Student1    10           363600
Tuesday      2/20/24 5:00 AM   1708405200    Student1    9            450000
Wednesday    2/21/24 5:00 AM   1708491600    Student1    8            536400
Thursday     2/22/24 5:00 AM   1708578000    Student1    7            18000
Friday       2/23/24 5:00 AM   1708664400    Student1    6            104400
Saturday     2/24/24 5:00 AM   1708750800    Student1    5            190800
Sunday       2/25/24 5:00 AM   1708837200    Student1    6            277200
Monday       2/26/24 5:00 AM   1708923600    Student1    7            363600
Tuesday      2/27/24 5:00 AM   1709010000    Student1    8            450000
Wednesday    2/28/24 5:00 AM   1709096400    Student1    9            536400
Thursday     2/29/24 5:00 AM   1709182800    Student1    10           18000
Friday       3/1/24 5:00 AM    1709269200    Student1    10           104400
Saturday     3/2/24 5:00 AM    1709355600    Student1    9            190800
Sunday       3/3/24 5:00 AM    1709442000    Student1    8            277200
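As a quick sanity check of that pattern, here is a small sketch, assuming the epoch values are UTC. Epoch 0 fell on a Thursday at 00:00 UTC, which is why each weekday-plus-hour maps to a fixed remainder; note the remainder will shift by 3600 if the 5:00 AM is a local time that crosses a DST change.

from datetime import datetime, timezone

SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800

# Print the weekday name and the week-remainder for a few timestamps from the table above.
for epoch in (1707368400, 1707454800, 1707800400, 1707886800):
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print(dt.strftime("%A"), epoch % SECONDS_PER_WEEK)

This prints Thursday 18000, Friday 104400, Tuesday 450000, and Wednesday 536400, matching the table.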