All Posts

This can be done by adding a props and transforms config file on the indexer machines. As an example, you could push an app to your indexers with /<appname>/local/props.conf containing:

[collectedevents]
# change_host and changehostfield are arbitrary values. Change how you like
TRANSFORMS-change_host = changehostfield

and /<appname>/local/transforms.conf containing:

# Stanza name must match whatever you set the "changehostfield" value to in props.conf
[changehostfield]
DEST_KEY = Metadata:Host
# Add your regex below
REGEX = <Computer>([^<]+)<\/Computer>
FORMAT = host::$1

Then the indexers should replace the host field with the value in the XML <Computer> field.
@ChocolateRocket, the latitude and longitude fields are generated by the iplocation command, and they are used to plot the data points on the map. You could remove them, but that would break the visualization. Good luck, we're all counting on you.
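For reference, a minimal sketch of the usual pattern (the index, sourcetype, and clientip field are placeholders for your own data, not from the original question):

index=web sourcetype=access_combined
| iplocation clientip ``` adds lat, lon, City, Country, etc. to each event ```
| geostats latfield=lat longfield=lon count

geostats already defaults to latfield=lat and longfield=lon; the options are spelled out only to make the dependency on the iplocation output explicit.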
@ITWhisperer  I tried the search below and it's not working at all:

| where _time=$eventid$ OR EventID=$eventid$ OR Server=$eventid$ OR Message=$eventid$ OR Severity=$eventid$

When I keep this search in the panel, it gives all the desired results. But when I search in the text box for values of Severity (Critical, Warning, or Information), it's not working; when I search for values of EventID, Server, or Message, it works. I think it's because Severity is a custom field; is that right? The EventID, Name (as Message), and host (as Server) fields are from _raw.

index=foo host=foo "$search$" OR Severity="$search$"
| eval Severity=case(EventID="1068", "Warning", EventID="1", "Information", EventID="1021", "Warning", EventID="7011", "Warning", EventID="6006", "Warning", EventID="4227", "Warning", EventID="4231", "Warning", EventID="1069", "Critical", EventID="1205", "Critical", EventID="1254", "Critical", EventID="1282", "Critical")
| rename Name as Message, host as Server
| table _time EventID Server Message Severity

Any suggestions?
WARN .... Reason: binary. That should be correct, since no binary access is granted in props, and by default Splunk is set not to ingest binaries. The descriptor limit of 100 is the default, and it's OK, but ingestion should progress anyway; and in splunkd.log I can't see any WARN about descriptors. Now, why are the ASCII system files under /etc not ingested, since it comes before /usr?
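For completeness, if the descriptor ceiling ever did become the limit, it can be raised on the monitoring instance in limits.conf; a sketch (256 is an arbitrary illustrative value, not a recommendation):

# limits.conf
[inputproc]
# max_fd defaults to 100 open file descriptors for the tailing processor
max_fd = 256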
We were able to address the issue. It seems that there's a conflict between Dynatrace RUM (Real User Monitoring) and the JS library used by the SOAR UI. Basically, by lowering the intensity of the Dynatrace Ruxit agent, it no longer overloads the browser stack, and all the callbacks are now working properly.
@tatdat171  I have also recently opened a case with Splunk support; it's in the queue and not acknowledged yet. Please let me know if you have any updates/findings. Thank you.
| bin _time span=1h
| stats count as volume by _time component
| bin _time span=1mon
| chart max(volume) as volume by component _time
| addtotals
| eval Average=Total/3
Hi @phanikumarcs , sorry if I'm repeating myself: if you don't want a full-text search on _raw, you have to declare the field to associate with each input (every kind of them). But pay attention if some events don't have one of the fields, because the default (e.g. event_id=*) will exclude the events without this field. Ciao. Giuseppe
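For illustration, a hypothetical sketch of the pitfall and one possible workaround (the index and field names are made up):

index=foo event_id="$event_id$" ``` even when $event_id$ defaults to *, events with no event_id field at all are dropped ```

index=foo
| eval event_id=coalesce(event_id, "*") ``` treat a missing event_id as a wildcard match ```
| search event_id="$event_id$"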
@gcusello @ITWhisperer  To clarify, my understanding is that if any fields are declared, only those fields will be searched instead of the full '_raw', and this applies to all input methods (text, dropdown, multiselect, and others). Is that correct? In that case, what is the solution for custom fields like the one in my query, where the field ("Severity") has the values Critical, Warning, and Information?
Hi @calvinmcelroy, Splunk can read LZW, gzip, bzip2, or any other compression format that supports streaming via stdin/stdout if properly configured, so I'm surprised you had problems with logrotate. Is your configuration outside the norm? If you're running oneshot from a host with Splunk Enterprise installed, i.e. a heavy forwarder, then yes, you should have the Palo Alto Networks Add-on for Splunk installed on that server.
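For reference, a sketch of the oneshot invocation (the file path and index are assumptions for illustration; pan:log is the add-on's umbrella sourcetype):

# run on the heavy forwarder with the PAN add-on installed
$SPLUNK_HOME/bin/splunk add oneshot /var/log/pan/traffic.log.1.gz -sourcetype pan:log -index pan_logs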
Hi @aavyu20, I recommend contacting TrackMe support directly. Their contact information is available on the TrackMe Splunkbase page.
Hi @PReynoldsBitsIO, URL options are specified in $SPLUNK_HOME/etc/apps/Trellix_Splunk/appserver/static/js/build/globalConfig.json:

...
{
    "field": "url",
    "label": "URL",
    "help": "Select a unique URL for this account. Refer to https://docs.trellix.com/ to get specific FQDN and Region for your account",
    "required": true,
    "type": "singleSelect",
    "options": {
        "disableSearch": true,
        "autoCompleteFields": [
            { "value": "https://arevents.manage.trellix.com", "label": "Global" },
            { "value": "https://areventsfrk.manage.trellix.com", "label": "Frankfort" },
            { "value": "https://areventsind.manage.trellix.com", "label": "India" },
            { "value": "https://areventssgp.manage.trellix.com", "label": "Singapore" },
            { "value": "https://areventssyd.manage.trellix.com", "label": "Sydney" }
        ]
    }
},
...

You may be able to add custom endpoints to this file following the pattern shown, but I recommend contacting the app developer directly to confirm. You can find their email address on the contact tab of other apps they've developed: https://splunkbase.splunk.com/apps?author=lgodoy
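For example, a hypothetical additional entry in the autoCompleteFields array (the URL is a placeholder, not a real Trellix endpoint):

{ "value": "https://events.custom.example.com", "label": "Custom" }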
Hi @lumi, Although your command should work, you might try:

$SplunkInstallationDir = "C:\Program Files\SplunkUniversalForwarder"
& "$($SplunkInstallationDir)\bin\splunk.exe" start --accept-license --answer-yes --no-prompt

# or

$SplunkExe = "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe"
& $SplunkExe start --accept-license --answer-yes --no-prompt

To run "splunk start," the account should have Full Control permission on C:\Program Files\SplunkUniversalForwarder and all subdirectories and files. Ideally, the command should be executed by the service account, assuming the forwarder is also configured to run as a service.
Hi @mnj1809, The choice element value attribute is a literal string. The string is compared against the current token value to determine which radio button is selected. For example:

<input type="radio" token="tokradio">
  <choice value="1">One</choice>
  <choice value="2">Two</choice>
  <choice value="3">Three</choice>
  <default>1</default>
</input>

The default value of $tokradio$ is 1, and choice One is selected. If either a user interaction or dashboard code sets the value of $tokradio$ to 2 or 3, choice Two or Three is selected, respectively. If $tokradio$ is set to a value other than the value attributes defined in the choice list, e.g. 4, no choice is selected. If your goal is to use a radio input to select field names and a text input to enter field values, you can define and update a separate token when either token changes:

<form version="1.1" theme="light">
  <label>mnj1809_radio</label>
  <init>
    <set token="tokradiotext">$tokradio$="$toktext$"</set>
  </init>
  <fieldset submitButton="false">
    <input type="radio" token="tokradio">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <change>
        <set token="tokradiotext">$value$="$toktext$"</set>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <set token="tokradiotext">$tokradio$="$value$"</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>tokradiotext=$tokradiotext$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>
Hi @sandfly_dev, Can you add an option to your configuration to allow customers to provide a list of trusted certificates in PEM or some other format? This could be a single self-signed certificate, a list of concatenated certificates in a certificate chain, etc., depending on what's supported by your code.
Hi @Intidev, I've achieved something similar in the past using a column chart with overlays. Hours are represented by two series, one for the hours before start of day (slack hours) and another for working hours. Events are represented by further series, with no more than one event per series; the overlays will render single events as points rather than lines. Building on @Richfez's example with mock data:

| makeresults format=csv data="day_of_week,slack_hours,work_hours,event01,event02,event03,event04,event05,event06,event07,event08,event09,event10,event11,event12,event13
2024-03-04,14,11,23,,,,,,,,,,,,
2024-03-05,0,0,,21,22,23,,,,,,,,,
2024-03-06,0,11,,,,,3,11,,,,,,,
2024-03-07,9,11,,,,,,,13,15,,,,,
2024-03-08,9,11,,,,,,,,,14,15,16,,
2024-03-09,0,0,,,,,,,,,,,,,
2024-03-10,9,11,,,,,,,,,,,,16,17"
| eval _time=strptime(day_of_week, "%F")
| chart values(slack_hours) as slack_hours values(work_hours) as work_hours values(event*) as event* over _time

we can save a column chart into a classic dashboard with the following configuration:

<dashboard version="1.1" theme="light">
  <label>intidev_chart</label>
  <row>
    <panel>
      <html>
        <style>
          #columnChart1 .ui-resizable {
            width: 500px !important;
          }
          #columnChart1 .highcharts-series.highcharts-series-1.highcharts-column-series {
            opacity: 0 !important;
          }
        </style>
      </html>
      <chart id="columnChart1">
        <search>
          <query>| makeresults format=csv data="day_of_week,slack_hours,work_hours,event01,event02,event03,event04,event05,event06,event07,event08,event09,event10,event11,event12,event13
2024-03-04,14,11,23,,,,,,,,,,,,
2024-03-05,0,0,,21,22,23,,,,,,,,,
2024-03-06,0,11,,,,,3,11,,,,,,,
2024-03-07,9,11,,,,,,,13,15,,,,,
2024-03-08,9,11,,,,,,,,,14,15,16,,
2024-03-09,0,0,,,,,,,,,,,,,
2024-03-10,9,11,,,,,,,,,,,,16,17"
| eval _time=strptime(day_of_week, "%F")
| chart values(work_hours) as work_hours values(slack_hours) as slack_hours values(event*) as event* over _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsY.majorTickVisibility">show</option>
        <option name="charting.axisLabelsY.majorUnit">1</option>
        <option name="charting.axisLabelsY.minorTickVisibility">hide</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.text">Hour</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.includeZero">1</option>
        <option name="charting.axisY.maximumNumber">24</option>
        <option name="charting.axisY.minimumNumber">0</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.markerSize">16</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.overlayFields">event01,event02,event03,event04,event05,event06,event07,event08,event09,event10,event11,event12,event13</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.placement">none</option>
        <option name="charting.fieldColors">{"work_hours": 0xc6e0b4}</option>
        <option name="height">500</option>
      </chart>
    </panel>
  </row>
</dashboard>

This gives us a dashboard similar to this:

[screenshot: stacked column chart with per-event overlay points]

We can further manipulate the layout and colors with CSS and JavaScript (if available to us) and creative use of dashboard tokens.
The question is indeed a bit vaguely worded. But in general, you would first want to search only for authentication attempts (hard to say how, since we don't know what data you have). It would be best if you had data normalized to CIM; then you could just search from the data model. Then you just do stats over your desired splitting fields. That should do the trick.
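For example, a sketch assuming the CIM Authentication data model is populated (the action value and the split fields are illustrative choices, not from the original question):

| tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.user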
Count is not the same as volume. Unless you have a synthetic field added during ingestion (or use summary indexing), you have to calculate it manually. (Unfortunately, you cannot use tstats for that, so it's going to be costly, since every single matching event has to be read and "measured".)

index=whatever <your other conditions>
| eval eventlength=len(_raw)

Now you can do some summarizing:

| bin _time span=1h
| stats sum(eventlength) as volume by source component whatever

This will give you one-hour volumes. Now you can do with it whatever you want, like the stats @ITWhisperer already posted.
Hi @shocko, The typical approach discards lines at an intermediate heavy forwarder or indexer by sending them to nullQueue:

# props.conf
[my_sourcetype]
# add line and event-breaking and timestamp extraction here
TRANSFORMS-my_sourcetype_send_to_nullqueue = my_sourcetype_send_to_nullqueue

# transforms.conf
[my_sourcetype_send_to_nullqueue]
# replace foo with a string or expression matching "keep" events
REGEX = ^(?!foo).
DEST_KEY = queue
FORMAT = nullQueue

As with @PickleRick, I've not seen a common use case for force_local_processing. I often say I don't want my application servers turning into Splunk servers, so I prioritize a lightweight forwarder configuration over data transfer. If CPU cores (fast-growing files) and memory (large numbers of files) cost you less than network I/O, you may prefer the force_local_processing option; you won't save on disk I/O either way. If you need a refresher on the functions performed by the utf8, linebreaker, aggregator, and regexreplacement processors, see https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590781/highlight/true#M103485.
The depends and rejects options control visibility of the element.  That is the only function of the options.  To use a token in an element, just invoke the token name within the element.  There is no need to "declare" the token as one might in a programming language.
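A minimal Simple XML sketch of both points (the token names $show_details$ and $host_tok$ are hypothetical):

<!-- the panel is hidden until $show_details$ has a value; the token is never "declared" anywhere -->
<panel depends="$show_details$">
  <table>
    <search>
      <!-- $host_tok$ is used simply by referencing it inline in the query -->
      <query>index=main host="$host_tok$" | stats count by source</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </table>
</panel>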