All Topics

Hi all, I would like to visualize a person's schedule and show the moments when events took place. The visualization should make it apparent whether the events took place within or outside the person's working hours. I'm stumped on how to tackle this. Does anyone know which visualization type to use? Any pointers on how to prepare the data would also be welcome. Data example:

EmployeeID    work_from    work_until    event_timestamp
123           08:00        17:00         16:30
123           08:00        17:00         01:00

Below is a quick sketch of what I would like to end up with. The green bars show the working hours, so on Monday this person is working from 14:00 - 24:00 and has an event at 23:00. On Tuesday the person is not working but still has 3 events.
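A minimal data-preparation sketch, assuming the three time fields are HH:MM strings as in the example (the index name is a placeholder): convert each value to a comparable number and flag each event as inside or outside the shift. Shifts that cross midnight would need extra handling.

index=your_schedule_index
| eval work_start=tonumber(replace(work_from, ":", "")), work_end=tonumber(replace(work_until, ":", "")), event_t=tonumber(replace(event_timestamp, ":", ""))
| eval in_shift=if(event_t>=work_start AND event_t<=work_end, "inside", "outside")

With an in_shift flag per event, a timeline-style custom visualization, or a scatter of events over a bar chart of shift ranges, could then color events by the flag.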
As an MSSP overseeing 10 distinct Splunk customers, I'm on the lookout for a solution that allows me to efficiently monitor license usage and memory consumption across all clients. Since these customers don't communicate with each other, I'm considering setting up a dedicated Splunk instance to collect and consolidate these logs. Any recommendations for apps that can help achieve this seamlessly, or perhaps an alternative approach that you've found effective in similar scenarios? Your insights would be greatly appreciated! Thanks in advance. #SplunkMonitoring #MSSPChallenges #splunkenterprise #monitoringConsole
Hello everyone. I experienced a cyberattack on my computer; Avast Firewall detected it and alerted me with pop-up messages. I intend to report the incident to the police for investigation, and they require a log file containing details of this attack. The Windows boot record on the hard drive has been corrupted, leaving me only able to access the folders and files by mounting it as an external drive. I need assistance locating the correct folder and file containing evidence of the cyberattack. I have a screenshot of the Avast warning message, which occurred on January 4, 2024. Thank you, Daniel
I have the index=fortigate and there are two sourcetypes ("fgt_event" and "fgt_traffic").

index=fortigate sourcetype=fgt_event | stats count by user, assignip

user    assignip
john    192.168.1.1
paul    192.168.1.2

index=fortigate sourcetype=fgt_traffic | stats count by src srcport dest destport

src            srcport    dest        destport
192.168.1.1    1234       10.0.0.1    22
192.168.1.2    4321       10.0.0.2    22

I want to correlate the results like this:

user    src (or assignip)    srcport    dest        destport
john    192.168.1.1          1234       10.0.0.1    22
paul    192.168.1.2          4321       10.0.0.2    22

I have learned SPL constructs like join, mvappend, coalesce, and subsearches, and I have tried many combinations of them, but it still isn't working. Please help me. Thanks.
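A minimal sketch of one common approach, assuming assignip and src hold the same IP values and can therefore serve as a join key (field names taken from the post):

index=fortigate sourcetype IN (fgt_event, fgt_traffic)
| eval ip=coalesce(assignip, src)
| stats values(user) as user, values(srcport) as srcport, values(dest) as dest, values(destport) as destport by ip

This searches both sourcetypes at once, unifies the IP field with coalesce, and lets stats stitch the event and traffic attributes together per IP, avoiding the row limits of join.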
Hello, I'm trying to pass a list of dicts from a "custom code block" into a "filter block", to run the ip_lookup sub-playbook, the hash_lookup sub-playbook, or both, based on the indicator type. For example:

ioc_list = [
    {"ioc": "2.2.2.2", "type": "ip"},
    {"ioc": "1.1.1.1", "type": "ip"},
    {"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}
]

And then in the filter I have:

if get_indicators:custom_function:ioc_list.*.type == ip, run -> ip_lookup sub-playbook
if get_indicators:custom_function:ioc_list.*.type == hash, run -> hash_lookup sub-playbook

It looks like the filter does half of the job: it routes to the proper sub-playbook(s), but instead of forwarding only the elements that match each condition, it forwards all elements.

Expected output, filtered-data on the condition_1 route:

[{"ioc": "2.2.2.2", "type": "ip"}, {"ioc": "1.1.1.1", "type": "ip"}]

Expected output, filtered-data on the condition_2 route:

[{"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}]

Actual output on both condition routes:

[{"ioc": "2.2.2.2", "type": "ip"}, {"ioc": "1.1.1.1", "type": "ip"}, {"ioc": "ce5761c89434367598b34f32493b", "type": "hash"}]

Even though this seems like a specific question, it is also part of a broader misunderstanding on my part of how custom code blocks and filters interact with each other. I hope someone can point me down the correct path. Thanks.
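A minimal sketch of one workaround, assuming the split can be done inside the custom function itself so that each downstream path receives only its matching elements (the function and output names here are hypothetical, not part of the SOAR API):

# Hypothetical helper for a SOAR custom code block: split the IOC list
# by type and publish each sub-list as its own output for downstream blocks.
def split_iocs(ioc_list):
    ip_iocs = [i for i in ioc_list if i.get("type") == "ip"]      # only "ip" entries
    hash_iocs = [i for i in ioc_list if i.get("type") == "hash"]  # only "hash" entries
    return ip_iocs, hash_iocs

Routing each returned list to its own sub-playbook input sidesteps the filter's pass-everything behavior entirely.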
Splunk SOAR (On-premises) installs with a default license, the Community License. The Community License is limited to:

100 licensed actions per day
1 tenant
5 cases in the New or Open states

If the quota of 5 cases has already been reached, will I still be assigned new cases? Or can new cases only be assigned once the previous 5 cases have been resolved?
Hi all, I'm struggling with a problem: I can't find any error logs in the Asset and Identity Management dashboard in Splunk Enterprise Security. It shows NOT FOUND, and the error message behind it is "You need edit_modinput_manager capability to edit information." But I'm an admin who already has this capability. I hope someone can tell me how to fix this error. Thank you.
We had a problem with our syslog server and a bunch of data went missing from the ingest. The problem was actually caused by the UF not being able to keep up with the volume of logs before the logrotate process compressed the files, making them unreadable. I caught this in progress and began making copies of the local files so that they would not get rotated off the disk. I am looking for a way to put them back into the index in the correct place in _time. I thought it would be easy, but it is turning out harder than I expected.

I have tried creating a monitor input for a local file and cat/printf-ing the log file into the monitored file. I have also tried the "add oneshot" CLI command; neither way has gotten me what I want. The monitored file kind of works, and I think I could probably make it better with some tweaking. The "add oneshot" command actually works very well, and it is the first time I am learning about this useful command.

My problem, I believe, is that the sourcetype I am using is not working as intended. I can get data into the index using the oneshot command, and it looks good as far as breaking the lines into events, etc. The problem I am seeing is that the parsing rules included in the props/transforms of the Splunk_TA_paloalto add-on are not being applied. Splunk is parsing some fields, but I suspect it is guessing based on the format of the data. When I look at the props.conf for the TA, I see it uses a general stanza called [pan_log], but inside the config it transforms the sourcetype into a variety of different sourcetypes based on the type of log in the file (there are at least 6 possibilities):

TRANSFORMS-sourcetype = pan_threat, pan_traffic, pan_system, pan_config, pan_hipmatch, pan_correlation, pan_userid, pan_globalprotect, pan_decryption

When I use the oneshot command, the data goes into the index and I can find it by specifying the source, but none of these transforms is happening, so the logs are not separated into the final sourcetypes. Has anybody run into a problem like this and know a way to make it work? Or have any other tips that I can try to make some progress on this? One thing I was thinking is that the Splunk_TA_paloalto add-on is located on the indexers, but not on the server that has the files I am running the oneshot command from. I expected this would all be happening on the indexer tier, but maybe I need to add it locally so Splunk knows how to handle the data. Any ideas?
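A minimal sketch of the oneshot path under that theory, assuming the Splunk_TA_paloalto props/transforms are installed on whichever full Splunk instance first parses the data, and that the file is fed in with the TA's catch-all sourcetype (the file path and index name are placeholders):

splunk add oneshot /path/to/saved/firewall.log -sourcetype pan_log -index pan_logs

Since sourcetype-rewriting transforms run at parse time, they only fire on an instance that actually has the TA; if the oneshot is issued from a full instance without the TA, the events arrive at the indexers already parsed and the rewrite never happens.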
Why can I not find any documentation on the Firewall Investigator module of Splunk Enterprise?
Hello all, I wanted to share my recently published Splunk extension for Chrome and Edge. This extension is free and enables several features:

Code commenting with Ctrl + /
Comment collapsing/folding
Saving and retrieving queries
Inline command help

You can check it out at Splunk Search Assistant (google.com). Julio
Is there a way to change the _time field of imported data to be a custom extracted datetime field? Or at least some way to specify a different field to be used by the time picker? I have seen some solutions that use props.conf, but I am on Splunk Cloud.
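A minimal search-time sketch, assuming the extracted field is called event_time and uses a known timestamp format (both the field name and the strptime pattern are assumptions to adjust):

index=your_index
| eval _time=strptime(event_time, "%Y-%m-%d %H:%M:%S")

Note that this only rewrites _time after the events are retrieved; the time picker still filters on the indexed _time, so the initial range must be wide enough to include the events. Changing the indexed timestamp itself (e.g., via TIME_PREFIX/TIME_FORMAT in props.conf) is an ingest-time setting that, on Splunk Cloud, generally has to be made through the sourcetype management UI or a support ticket.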
NetApp products which are running Data ONTAP are being transitioned from ZAPI to REST, and support for ZAPI will be dropped in future ONTAP releases. Since the Splunk TA uses ZAPI, does it also support REST? If it does not currently use REST, are there plans to deliver a future version that does? Thanks.
|tstats count where index=app-idx host="*abfd*" sourcetype=app-source-logs by host

This is my alert query. I want to modify the query so that I won't receive alerts at certain times. For example, on the 10th, 18th, and 25th of every month, during 8am to 11am, I don't want to get the alerts. On all other days it should work as normal. How can I do it?
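A minimal sketch of one approach: derive the current day of month and hour when the alert runs, and discard all results inside the quiet window so the alert has nothing to trigger on (the days and hours are taken from the post; adjust as needed):

|tstats count where index=app-idx host="*abfd*" sourcetype=app-source-logs by host
| eval day=tonumber(strftime(now(), "%d")), hour=tonumber(strftime(now(), "%H"))
| where NOT ((day=10 OR day=18 OR day=25) AND hour>=8 AND hour<11)
| fields - day hour

The alternative is to leave the search alone and express the quiet window in the alert's cron schedule, but a single cron expression cannot easily say "skip 8-11am only on the 10th, 18th, and 25th", which is why the in-search guard is often simpler.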
When writing regex, where in the regex string am I supposed to add the (?<new_field>) capture group? I have included a sample regex string below; where in this string would I add (?<new_field>)?

(?<=\:\[)(.*)(?=\])

Thanks!
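A minimal sketch: the named group wraps whatever part of the pattern should become the field's value, so here it would replace the existing unnamed group (.*) while the lookbehind and lookahead stay outside it:

(?<=\:\[)(?<new_field>.*)(?=\])

Used in a search, assuming the value lives in _raw, that would look like:

| rex field=_raw "(?<=\:\[)(?<new_field>.*)(?=\])"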
AppDynamics recently published Security Advisories regarding two medium-severity vulnerabilities in the AppDynamics Controller:

Cisco AppDynamics Controller Path Traversal Vulnerability (CVE-2024-20345)
Cisco AppDynamics Controller Cross Site Scripting Vulnerability (CVE-2024-20346)

You can find details and guidance for these in the public Security Advisories at the links above, and in the documentation under Product Announcements and Alerts.
Hi All! Hope all is well. I am about to pull my hair out trying to override a sourcetype for a specific set of TCP network events. The events start with the same string, 'acl_policy_name', and are currently being labeled with a sourcetype of 'f5:bigip:syslog'. I want to override that sourcetype with a new one, 'f5:bigip:afm:syslog'; however, even after modifying the props and transforms conf files, still no dice. I used regex101 to ensure that the regex for the 'acl_policy_name' match is correct, but I've gone through enough articles and Splunk documentation to no avail. Nothing in the btool output looks out of place or as though it could be interfering with the settings below. Any thoughts or suggestions would be greatly appreciated before I throw my laptop off a cliff. Thanks in advance!

Event Snippet:

Inputs.conf

[tcp://9515]
disabled = false
connection_host = ip
sourcetype = f5:bigip:syslog
index = f5_cs_p_p

Props.conf

[f5:bigip:syslog]
TRANSFORMS-afm_sourcetype = afm-sourcetype

*Note: I also tried [source::tcp:9515] as the stanza spec instead of the sourcetype, but no dice either way.

Transforms.conf

[afm-sourcetype]
REGEX = ^acl_policy_name="$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:afm:syslog
WRITE_META = true
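One detail worth checking, offered as a hedged observation: the $ anchor in the REGEX means it only matches an event that is exactly acl_policy_name=" and nothing more. If the events merely begin with that string, a sketch of the transform without the end anchor would be:

[afm-sourcetype]
REGEX = ^acl_policy_name="
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:bigip:afm:syslog

WRITE_META is only needed for index-time field extractions, not for DEST_KEY rewrites, so it can be dropped here. The transform also has to live on the instance that first parses the TCP input (indexer or heavy forwarder) for the override to take effect.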
Hi all, I have been looking at the Splunk CMC (Cloud Monitoring Console) for a customer and have noticed that the ingest per day has been up and down since early November. For some tabs, though, the default graphs won't let me go back to November to find trends such as the daily event count per day in November.

Could someone guide me on why this is and what would be a good place to start the investigation? For context, the architecture is:

UF --> HF --> SC
SC4S --> SC
Cloud data --> HF --> SC
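A minimal sketch for computing the daily event count over an arbitrary window directly, rather than relying on the dashboard defaults (the index filter is an assumption; narrow it to the indexes of interest):

| tstats count where index=* earliest=-120d@d latest=now by _time span=1d

Note that a search like this can only go back as far as the data's retention allows; if the CMC panels are backed by summaries with shorter retention, that would also explain why some tabs cannot reach back to November.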
Hello, I'm currently doing some training as part of a SOC analyst intern position. One of the questions in the little exercise our trainer created for us is this (some information has been omitted purposely out of respect for the organization): How many authentication attempts of each user category exist among all successful authentications? Would someone be able to assist me with a general start for how I would write up my search to look for this kind of info?
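A minimal starting sketch, assuming the events carry a success indicator and a user-category field (index, action, and user_category are all assumptions; substitute the actual field names from your data):

index=your_auth_index action=success
| stats count by user_category

Running the base search first and inspecting the extracted fields in the sidebar is usually the quickest way to find what the success flag and category field are actually called.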
I am running the following query for a single 24-hour period. I was expecting a single summary row in the result, and I'm not sure why the result is split across 2 rows. Here's the query:

index=federated:license_master_internal source=*license_usage.log type=Usage pool=engineering_pool
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| where like(h, "%metfarm%") OR like(h, "%scale%")
| eval h=rtrim(h,".eng.ssnsgs.net")
| eval env=split(h,"-")
| eval env=mvindex(env,1)
| eval env=if(like(env,"metfarm%"),"metfarm",env)
| eval env=if(like(env,"sysperf%"),"95x",env)
| eval env=if(like(env,"gs02"),"tscale",env)
| timechart span=1d sum(b) as b by env
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
| addtotals
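One likely explanation, offered as a hedged guess: timechart span=1d buckets on calendar-day boundaries, so a 24-hour range that starts mid-day (e.g., "last 24 hours") straddles two day buckets and yields two rows. If a single summary per env is wanted regardless of where the window falls, a sketch replacing the timechart line would be:

| stats sum(b) as b by env

followed by the same rounding, with the result left as one row per env or transposed into columns. Alternatively, snapping the search window to day boundaries (earliest=-1d@d latest=@d) keeps the timechart but confines it to one bucket.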
Hi community, I have an AlwaysOn Availability Group (AO AG) with two nodes, and I have these four IP addresses:

10.10.10.62 (DB 1)
10.10.10.63 (DB 2)
10.10.10.61 (Cluster IP)
10.10.10.60 (AG Listener IP)

I want to discover the two nodes automatically. According to the documentation, Configure Microsoft SQL Server Collectors (appdynamics.com): to enable monitoring of all the nodes, you must enable the dbagent.mssql.cluster.discovery.enabled property either at the Controller level or at the agent level.

I am running the following:

$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -jar db-agent.jar -Ddbagent.mssql.cluster.discovery.enabled=true &

But it doesn't work when I configure the collector with the AG Listener IP. I also see `Is Failover Cluster Discovery Enabled: False`, even though I added dbagent.mssql.cluster.discovery.enabled. What could I possibly be doing wrong? Thank you
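One likely culprit, offered as a hedged observation: in the command above, the discovery property appears after -jar db-agent.jar, so the JVM passes it to the agent as a program argument instead of setting it as a system property; Java only treats -D flags placed before -jar as system properties. A sketch with the flag moved:

$ nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Agent -Ddbagent.mssql.cluster.discovery.enabled=true -jar db-agent.jar &

If the agent still reports Is Failover Cluster Discovery Enabled: False after that, checking whether the property was instead meant to be set on the Controller side (as the quoted documentation allows) would be the next step.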