All Posts


You could attach your props to some wildcarded host or source stanza, but that's something I'd be very careful about. It's a very non-obvious configuration and can make issues a huge pain to debug.
Ok. Firstly, invest in some punctuation, please, because this stream of consciousness is difficult to read. Secondly, what are you spinning up? You mention server classes, so I suspect you're talking about creating some (virtual? doesn't matter really) machines with a pre-installed UF. And now what? That UF contains some pre-defined settings, especially including outputs.conf? If it does, then what do you want to "heartbeat"? It's gonna be sending its own internal logs anyway. It is also a fairly typical practice to distribute with your UF a kind of a "starter pack" of standard apps containing common configuration items (like the DS address, outputs.conf and such) and generally accept all hosts to a serverclass distributing current versions of those apps. So what heartbeat do you want?
I don't want to remove the overlay. I only want to remove the number 10.
This is confusing because your search specifically sets the values you want to remove. The simple solution is to remove the last pipe and eval of the additional field. Assuming you need that field for some alternate reason, then I would recommend:
1) Create a base search "ds_base" that doesn't include the pipe and eval of the overlay
2) Create the viz and map its data source to the base search
3) Create a chain search which has the pipe and eval of the overlay field and map it to the base
4) Map the alternate need to the chain search as its source
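Not from the original reply, but a sketch of what steps 1–3 above might look like in Dashboard Studio's JSON source. The data source names, the base query, and the overlay eval are placeholders; the `ds.chain` type with the `extend` option is what ties the chain search to its base:

```json
{
  "dataSources": {
    "ds_base": {
      "type": "ds.search",
      "options": {
        "query": "index=main | bucket span=1s _time | stats count by _time | timechart max(count) AS Peak_TPS span=1d"
      }
    },
    "ds_with_overlay": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_base",
        "query": "| eval overlay=10"
      }
    }
  }
}
```

The viz that should not show the number would point at `ds_base`; anything that needs the overlay field points at `ds_with_overlay`.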
Currently the only event is an onClick-type trigger, as far as I can see in the documentation: https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/WhatNew Since it appears what you want to do is trigger a search and then wait, you might want to look into the recently added Submit button options, which should allow you to trigger a data source re-search on demand.
Yes..... Is there a way to implement masking globally? If not, I assume we need to add each sourcetype in props.
Hi Splunkers, I have a question and need help from the experts. I'm working on creating a heartbeat tracker search that monitors when a host gets spun up. Whether it's Windows or Linux, it gets generic apps from the server class; there is a server class built out there that is just looking for any host that isn't already in a server class. The purpose of the heartbeat tracker is to inform us that there is a brand-new host that isn't in a server class yet. The ask is to track the hosts showing up in the heartbeat index, and if these hosts are there for multiple days, that means they need to be addressed. As an example, every host that gets spun up, whether we know about it or not, is going to get the heartbeat initially; once it gets its real apps, it stops sending logs to the heartbeat index. So what I really want to know is, per host, how many days it has been talking to the X index. If I get a host that has been talking to the X index for several days, then I know that isn't the initial start-up; it's a problem that needs to be looked at.

| tstats count where index=X by host index _time span=1d
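Not part of the original question, but one way the per-host day count could be sketched in SPL. The index name X and the 3-day threshold are placeholders:

```
| tstats count where index=X by host _time span=1d
| stats count AS days_seen min(_time) AS first_seen max(_time) AS last_seen by host
| where days_seen > 3
| convert ctime(first_seen) ctime(last_seen)
```

The first `tstats` produces one row per host per day, so the follow-up `stats count` is the number of distinct days each host has been reporting to the heartbeat index.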
The Python modules are stored in the <Addon-name>/bin/<addon_name>/aob_py3 directory. All you need to do is install the package using the CLI:

pip install <package_name> --target $SPLUNK_HOME/etc/apps/<Addon-name>/bin/<addon_name>/aob_py3

Hope it helps @mninansplunk
Yes, it is possible.
Dropdown configuration: [screenshot]
Rectangle configuration: [screenshot]
My test run visual results: [screenshot]
The left side is just a markup window which shows me the value of the token. The dropdown is initially populated by a local system index list.
I'm trying to build a Local Attack Range, but it fails when it tries to restart splunk.service. The Splunk instance does restart, but fails when the systemctl command is invoked. I did ensure that THP was disabled, SELinux was disabled, and ulimits were set properly on the host. I did increase the timeout, but it still fails to restart even after 30 minutes. "python attack_range.py build" does successfully create the Splunk instance and installs all the required apps & TAs. It just fails when restarting Splunk Enterprise as a systemd service within the Vagrant VM. Any feedback would be appreciated!!!

TASK [splunk_server_post : change password splunk] *****************************
changed: [ar-splunk-attack-range-key-pair-ar]
TASK [splunk_server_post : restart splunk] *************************************
fatal: [ar-splunk-attack-range-key-pair-ar]: FAILED! => {"changed": false, "msg": "Unable to restart service splunk: Job for splunk.service failed because a timeout was exceeded.\nSee \"systemctl status splunk.service\" and \"journalctl -xe\" for details.\n"}
RUNNING HANDLER [splunk_server_post : restart splunk] **************************
PLAY RECAP *********************************************************************
ar-splunk-attack-range-key-pair-ar : ok=139 changed=64 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
2024-09-19 16:22:49,709 - ERROR - attack_range - vagrant failed to build
(attack-range-py3.8) aradmin@attackrange:~/attack_range$

Here is my attack_range yml file:

general:
  attack_range_password: "xxxxx"
  cloud_provider: local
  use_prebuilt_images_with_packer: "0"
  ingest_bots3_data: "1"
local:
splunk_server:
  # Enable Enterprise Security
  install_es: "1"
  # Save to the apps folder from Attack Range
  splunk_es_app: "splunk-enterprise-security_732.spl"
phantom_server:
  # Enable/Disable Phantom Server
  phantom_server: "0"
kali_server:
  kali_server: "1"
windows_servers:
- hostname: ar-win-dc
  windows_image: windows-server-2022
  create_domain: '1'
  install_red_team_tools: '1'
  bad_blood: '1'
- hostname: ar-win-2
  windows_image: windows-2019-v3-0-0
  join_domain: '1'
  install_red_team_tools: '1'
linux_servers:
- hostname: ar-linux
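Not part of the original post, but since the failure is systemd's own start timeout rather than Splunk itself, one knob worth checking inside the Vagrant VM is the unit's TimeoutStartSec. A drop-in override is a minimal sketch; the path and the 600-second value are assumptions:

```
# /etc/systemd/system/splunk.service.d/override.conf
[Service]
TimeoutStartSec=600
```

After writing the drop-in, run `sudo systemctl daemon-reload` and retry the restart; `journalctl -xeu splunk.service` should show what the unit was actually waiting on.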
I'm having issues with this transition as well and have not found a solution yet.  Anyone?
I have the Splunk App for SOAR Export running. I can open one of the forwarding events, click "Save and Preview", and send any events into SOAR. This is working. I can go into the Searches, reports, and alerts area and find the alert the app created; it's scheduled, running, and finding notables. This is working. What's not working is that when the scheduled alert runs, what it finds never gets sent into SOAR. So, manually sending to SOAR works from the app, and the scheduled alert the app uses is running and finding notables, but nothing ever goes into SOAR. The owner is nobody for all of the searches. Is this a permissions issue, maybe?
Since nginx is forwarding some logs, you know the connection is functional. So when you mention "not all logs, like WAF and DoS", do you mean none of those message types are ingested at Splunk, or just that some messages of those types are not ingested? If all messages like WAF and DoS are missing, then perhaps a filter update is required; what happens to messages that do not have a matching filter, and is there a catch-all index set up? Any packet captures to demonstrate the WAF and DoS messages are forwarded from nginx to sc4s?
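Not from the original reply, but a quick sanity check of what is actually arriving could be sketched like this; the index and sourcetype filters are placeholders to adapt to your sc4s setup:

```
| tstats count where index=* sourcetype=nginx* by index sourcetype
```

If the WAF/DoS sourcetypes show zero events everywhere, including any catch-all index, the messages are likely being dropped before indexing; if they appear under an unexpected index or sourcetype, it points to a filter mismatch.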
I have a hidden search. When I have a result, I want to set the token based on that result; otherwise, if I don't have any results, I want to set the token to *. However, this does not work for me yet (the no-results part, with setting the token to *).

<search id="latest_product_id">
  <query>| mysearch | head 1 | fields product_id</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>60</refresh>
  <depends>
    <condition token="some_token">*</condition>
  </depends>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition match="'job.resultCount' == 0">
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>
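Not from the original post, but a sketch of one likely fix, assuming Simple XML: `<depends>` expects a token name as its text rather than `<condition>` child elements, so that block should go, and a bare `<condition>` with no match attribute can serve as the else branch inside the `<done>` handler:

```xml
<search id="latest_product_id">
  <query>| mysearch | head 1 | fields product_id</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <refresh>60</refresh>
  <done>
    <condition match="'job.resultCount' != 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition>
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>
```

With this shape the fallback fires whenever the first condition does not match, which covers the zero-results case without needing a second match expression.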
Perfect - thank you so much
Please share the raw events from the shared example. 
Hi @LizAndy123 , ok, it's the reverse condition: <your_search> | stats values(ProjectID) AS ProjectID BY Speed | sort -Speed | head 10 | table ProjectID Speed Ciao. Giuseppe
index= | bucket span=1s _time | stats count by _time | timechart max(count) AS Peak_TPS span=1d | eval overlay=10

I want to remove the number displayed on the overlay.
What is your search string behind the viz?  It could be as simple as appending the search with... | fields - overlay_field_name