All Posts
It appears that this privilege is restricted to the "admin" and "power" roles themselves, rather than granted by a single capability. I tested this by making a test role inherit from the "power" role, which let me upload an image. But when the test role is assigned all the same capabilities as the power role, it does not allow uploading images. You will therefore need to either inherit from the power or admin role, or ask someone with that inheritance or role assignment to upload the image for you. P.S. ChatGPT is off the mark here: there is no "edit_visualizations" capability in Splunk.
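If you go the inheritance route, a minimal sketch of the authorize.conf stanza is below (the role name "image_uploader" is a hypothetical placeholder; importRoles is the setting that controls inheritance):

# authorize.conf (sketch)
[role_image_uploader]
importRoles = power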
Indeed, SOAR on-prem is in an awkward situation for OS support. SOAR on-prem only supports Amazon Linux 2, RHEL, or the now end-of-life CentOS. The SOAR automation broker runs on Debian, but that only helps you if you are using the cloud version of SOAR. I believe there was some chatter in the #SOAR usergroup about adding support for two other CentOS-derived Linux distros, but it's not there yet. You could probably get SOAR running on a distro similar to CentOS, but you'd have to spend more time tinkering to get it working.
I see a few issues, but don't know that fixing them will solve the problem.
1. All of the sed commands are malformed. There should be a single "s/" at the beginning and only two slashes before the final 'g'.
2. I'm not sure quotation marks are allowed in a stanza name.
3. "WinEventLog://" is a prefix for inputs.conf stanzas, not for props.conf.
4. Have you tried using a sourcetype name in the props.conf stanza rather than a source name?
It would help to see some sample events and to know which parts of the events you wish to remove. If you want someone to confirm the inputs.conf stanza then you'll need to show the inputs.conf stanza.
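For illustration only, a well-formed version of the first SEDCMD, moved to a sourcetype-based stanza, might look like this ("your_powershell_sourcetype" is a placeholder, and the regex is carried over from the question unverified):

[your_powershell_sourcetype]
SEDCMD-clean_powershell_800 = s/\n\s+Context\sInformation\:.*([\r\n]+.*){0,500}//g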
Dears, the EUM service and its database do not start automatically after a reboot of our RHEL Linux server. After every reboot we need to start EUM and its database manually. Is there any solution to automatically start the EUM service and its database when the server reboots? Thank you.
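One common approach on RHEL is to wrap the manual start commands in systemd units so they run at boot. A minimal sketch, assuming a hypothetical install path and start script (substitute the actual commands you run manually, and create a similar unit for the database, ordered first):

# /etc/systemd/system/eum.service (sketch; path and script name are assumptions)
[Unit]
Description=AppDynamics EUM Server
# ensure the network and the EUM database unit start first
After=network.target eum-db.service
Requires=eum-db.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/opt/appdynamics/EUM/eum.sh start
ExecStop=/opt/appdynamics/EUM/eum.sh stop

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable eum.service so it starts on every reboot.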
Try something like this | where 'delta(avg(requestSize))' > 0 OR 'delta(avg(responseSize))' > 0
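The single quotes tell where to treat the parenthesised names produced by delta as field references rather than expressions. Combined with the original search, the full pipeline would look like:

index=xxxx source=xxxx rcrdType=xxx
| timechart span=1h avg(requestSize) avg(responseSize)
| delta avg(requestSize)
| delta avg(responseSize)
| where 'delta(avg(requestSize))' > 0 OR 'delta(avg(responseSize))' > 0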
I am using Splunk with the delta command:

index=xxxx source=xxxx rcrdType=xxx | timechart span=1h avg(requestSize) avg(responseSize) | delta avg(requestSize) | delta avg(responseSize)

I need to modify the query to ONLY include those events where either delta avg(requestSize) OR delta avg(responseSize) OR both are positive.
In the Details page of the Splunk AI Assistant for SPL (Cloud Version), it says "Please do not enter any data that is personally identifiable, confidential, or otherwise sensitive." I would take this to mean it relies on the user not to put confidential information in. There is also a local version of the AI Assistant app which runs a pre-trained model that can formulate and explain SPL queries. This version does not learn using the input data, and the server it runs on can be isolated from the internet to ensure no customer data leaves the machine.
Hi, in our organization we use WEF to monitor Windows. We configured an inputs.conf for monitoring from the Event Viewer. The PowerShell event logs (mainly event codes 800 and 4103) are too long, and we want to cut the duplicated data. We tried various tests with props.conf and transforms.conf and nothing works. Here are some of the stanzas we tried in props.conf:

[source::"XmlWinEventLog:Windows PowerShell"]
SEDCMD-CLean_powershell_800 = s/s/\n\s+Context\sInformation\:.*([\r\n]+.*){0,500}////g
SEDCMD-CLean_powershell_4103 = s/s/\s+Context\:.*([\r\n]+.*){0,500}////g

[source::XmlWinEventLog:Microsoft-Windows-PowerShell/Operational]
SEDCMD-CLean_powershell_800 = s/s/\n\s+Context\sInformation\:.*([\r\n]+.*){0,500}////g
SEDCMD-CLean_powershell_4103 = s/s/\s+Context\:.*([\r\n]+.*){0,500}////g

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
SEDCMD-CLean_powershell_800 = s/s/\n\s+Context\sInformation\:.*([\r\n]+.*){0,500}////g
SEDCMD-CLean_powershell_4103 = s/s/\s+Context\:.*([\r\n]+.*){0,500}////g

Also, I wanted to make sure the inputs.conf stanza for PowerShell is correct when I use renderXml = true instead of wec_event_format = rendered_event.
Correct. The _raw field contains the data in the event as it is indexed in Splunk. This data can differ from the raw data of the sending device depending on the index-time processing that has been applied to it.
_raw data exported from a search query. This is not the actual raw data stream from the sending device, correct? This is the data after any default rules have been applied at index time.
It's not that those indexes *should* not be replicated; it's that it's not a big deal if data is lost from them, so you can save storage by not replicating them. If you have a low risk tolerance then feel free to set repFactor = auto for those indexes (it won't hurt anything).
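As a sketch, enabling replication for one of those indexes in indexes.conf on the cluster manager would look like this (the index name is just an example):

[_internal]
repFactor = auto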
I apologize. It does in fact work in a standard search window. However, the panel in Dashboard Studio does not. Thanks for your time.
OK. Please share some representative sample events (anonymised as necessary) demonstrating how this solution fails.
Thanks, but no dice. 
Let's get back to basics: when your events are broken, using search techniques to cope is the last thing to consider. Can you post a sample raw file, the exact event contents Splunk receives, and your props.conf stanza corresponding to this sourcetype? Without data, volunteers have nothing to go on.
DS is not aware of the structure and functionality of deployment apps. What is your use case? What do you want to achieve?
Try something like this | appendpipe [| stats count as "Total impact %"| where 'Total impact %'=0]
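For context: appendpipe runs its subpipeline against the current result set, so when the base search returns rows the inner stats count is non-zero and the where clause discards it, while an empty result set yields a count of 0 that survives the filter and gets appended. A hypothetical full search (the base search here is an assumption for illustration) might be:

index=main sourcetype=access_combined status=500
| stats count as "Total impact %" by host
| appendpipe [| stats count as "Total impact %" | where 'Total impact %'=0]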
Hi @Naa_Win, it's always a best practice that all the Splunk servers (so also the DS) send their logs to the indexers. So you could create an input on the DS that monitors the files in the /opt/splunk/etc/deployment-apps folder, so they are indexed. Then you could display them in a custom dashboard, but that's all yours to develop; I don't know of anything already existing. Ciao. Giuseppe
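A minimal sketch of such an input on the DS (the index and sourcetype names are placeholders you would need to create):

# inputs.conf on the deployment server (sketch)
[monitor:///opt/splunk/etc/deployment-apps]
index = ds_apps
sourcetype = deployment_app_files
disabled = false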
Hi @anil1219, there is a request in Splunk Ideas to add a feature to do this, please upvote it! Anyway, using the solution you shared, you should get 0 instead of "no results found". For the green colour, in classic dashboards (I'm still not using Dashboard Studio) you should click on the pencil button in the panel of your dashboard. Then you can choose the colour and set the range for your colours. Ciao. Giuseppe
Hi @marka3721, in this case, check the regexes used in the transformations: take some log samples and put them in regex101.com, then use this regex and see what it captures as group 1:

^.+?devid=\"?F(?:G|W|\dK).+?(?:\s|\,|\,\s)type=\"?(traffic|utm|event|anomaly)

If it captures the correct extension of the sourcetype, it's correct; otherwise, modify it to adapt it to your different log format. One final question: what's the sourcetype of your logs? It should be fortigate_log or fgt_log, otherwise the transformations aren't taken into consideration. Ciao. Giuseppe
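For reference, a transforms.conf stanza that uses a regex like this to rewrite the sourcetype might look like the following sketch (the stanza name and FORMAT value are assumptions; check the actual Fortinet add-on for the real ones):

[force_sourcetype_fgt]
REGEX = ^.+?devid=\"?F(?:G|W|\dK).+?(?:\s|\,|\,\s)type=\"?(traffic|utm|event|anomaly)
FORMAT = sourcetype::fgt_$1
DEST_KEY = MetaData:Sourcetype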