All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Team, I am running the Endgame API Add-on 1.2.1 on my heavy forwarder (version 7.2.6). I am planning to upgrade the HF to 8.1.2, but I don't see a compatible Endgame API add-on: https://splunkbase.splunk.com/app/4267/ Can you please confirm whether the Endgame API Add-on 1.2.1 will work with Splunk Enterprise 8.1.2, or whether another compatible add-on is available? Regards,
We have indexers and Universal Forwarders in use, and no Heavy Forwarders. I know a UF cannot send parsed data to an external system; it can only send uncooked data, and all of it. But can the indexers send parsed logs (only specific ones, e.g. from the windows index) to an external system, perhaps through a REST API, syslog, or some other mechanism? The sequence would be: UF >> Indexers >> External System.
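Indexers can selectively route events to third-party systems. A minimal sketch of syslog routing via outputs.conf (hedged: the sourcetype WinEventLog, the group name external_syslog, and the destination host are assumptions for illustration; note that syslog output delivers the raw event text, not Splunk's cooked format):

```ini
# props.conf (on the indexers) -- attach routing to one sourcetype only
[WinEventLog]
TRANSFORMS-route_to_external = route_to_syslog

# transforms.conf -- match every event of that sourcetype and tag it
# with a syslog output group
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = external_syslog

# outputs.conf -- define where that group is delivered
[syslog:external_syslog]
server = external.example.com:514
```

The same TRANSFORMS pattern with DEST_KEY = _TCP_ROUTING and an outputs.conf [tcpout] group (with sendCookedData = false) is the usual route for plain TCP destinations.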
Hello, I wanted to change the actions, or add a new action, for the context menu of a field inside a log row. My first impression was that these are defined in "Workflow Actions", but it is not showing up there, so I guess it is something completely different? For illustration purposes please refer to the picture below. I hope you can help me out.
Hi Team, I am working on migrating a Splunk standalone installation to Docker images. As part of this I have copied the etc and var folders to the new server and mounted them as volumes into the container, but I am getting the error below while the container launches. I added a new password as an environment variable when running the container. Am I missing anything in the migration?

TASK [splunk_standalone : Setup global HEC] ************************************
splunkenterprise_1 | fatal: [localhost]: FAILED! => {
splunkenterprise_1 | "cache_control": "private",
splunkenterprise_1 | "changed": false,
splunkenterprise_1 | "connection": "Close",
splunkenterprise_1 | "content_length": "130",
splunkenterprise_1 | "content_type": "text/xml; charset=UTF-8",
splunkenterprise_1 | "date": "Mon, 19 Apr 2021 09:51:12 GMT",
splunkenterprise_1 | "elapsed": 0,
splunkenterprise_1 | "redirected": false,
splunkenterprise_1 | "server": "Splunkd",
splunkenterprise_1 | "status": 401,
splunkenterprise_1 | "url": "https://127.0.0.1:8089/services/data/inputs/http/http",
splunkenterprise_1 | "vary": "Cookie, Authorization",
splunkenterprise_1 | "www_authenticate": "Basic realm=\"/splunk\"",
splunkenterprise_1 | "x_content_type_options": "nosniff",
splunkenterprise_1 | "x_frame_options": "SAMEORIGIN"
splunkenterprise_1 | }
splunkenterprise_1 |
splunkenterprise_1 | MSG:
splunkenterprise_1 |
splunkenterprise_1 | Status code was 401 and not [200]: HTTP Error 401: Unauthorized
splunkenterprise_1 |
splunkenterprise_1 | PLAY RECAP *********************************************************************
splunkenterprise_1 | localhost : ok=56 changed=2 unreachable=0 failed=1 skipped=58 rescued=0 ignored=0
Hi All, I am trying to create a query for the below type of logs on a server:

log1: Thu Apr 15 07:31:31 EDT 2021 73% /var
log2: Thu Apr 15 07:31:31 EDT 2021 46% /opt
log3: Thu Apr 15 07:31:31 EDT 2021 50% /apps
log4: Thu Apr 15 07:31:31 EDT 2021 1% /logs
log5: 1% /logs 50% /apps 46% /opt 73% /var

Note: log5 is just the combined values of log1 to log4. Using the query

index=abc sourcetype=INFRA_FS | rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%" | rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)" | search host=29xyz | table File_System,Disk_Usage

I am getting the below table:

File_System    Disk_Usage
/var           73
/opt           46
/apps          50
/logs          1
/logs          50

Here, an extra row is coming into the table from log5. I want only the first four logs to be considered; the fifth log should be avoided/removed. Can anyone please help me create a query to get the output in the desired way? Your kind support will be highly appreciated. Thank you.
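One hedged way to drop the combined log5 event is to require the timestamp prefix before extracting, e.g. adding | regex _raw="^\w{3}\s\w{3}\s\d" ahead of the two rex commands (a sketch, not tested against the real data). The filtering idea, checked in Python:

```python
import re

logs = [
    "Thu Apr 15 07:31:31 EDT 2021 73% /var",
    "Thu Apr 15 07:31:31 EDT 2021 46% /opt",
    "Thu Apr 15 07:31:31 EDT 2021 50% /apps",
    "Thu Apr 15 07:31:31 EDT 2021 1% /logs",
    "1% /logs 50% /apps 46% /opt 73% /var",   # log5: the combined line
]

# Keep only events that start with a "Thu Apr 15 07:31:31" style timestamp;
# the combined log5 line has no timestamp prefix, so it is dropped.
TS_PREFIX = re.compile(r"^\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}")
kept = [line for line in logs if TS_PREFIX.match(line)]
print(len(kept))
```

The same anchor-at-start idea carries over to SPL because `regex` filters events before the field extractions run.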
Hi, I wonder why my average line is not displayed in my timechart?

<search>
<query>`CPU` | fields process_cpu_used_percent host
| search host=$tok_filterhost$ OR host=$tok_filterhost2$ OR host=$tok_filterhost3$
| timechart span=24h avg(process_cpu_used_percent) as "CPU used" by host useother=false
| eventstats avg("CPU used") as Average
| eval Average=round(Average,0)</query>
<earliest>-30d@h</earliest>
<latest>@d</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
<option name="charting.axisTitleX.text">Date</option>
<option name="charting.axisTitleY.text">CPU used (%)</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.chart">line</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.overlayFields">Average</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.drilldown">none</option>
<option name="charting.fieldColors">{"T6999": DC4E41, "T5473": 53A051, "T5470": 0847EE, "Average":0xFF5A09}</option>
<option name="charting.fieldDashStyles">{"CPU used":"solid"}</option>
<option name="charting.fontColor">#000000</option>
<option name="charting.lineWidth">4px</option>
<option name="height">400</option>
<option name="refresh.display">progressbar</option>
</chart>

Thanks in advance
Hey, I have a problem parsing my data:

19-04-2021 gil-server-1 {"systemId":"1254", "systemName":"coffe", "message":"hello dor"}

I want to extract the fields before Splunk indexes the data. How should I configure props.conf or transforms.conf?
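If the fields really must be extracted at index time, one minimal sketch uses transforms.conf with WRITE_META (the sourcetype name my_sourcetype and the transform names are assumptions for illustration, not from the original post):

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-extract_json_fields = extract_systemid, extract_systemname

# transforms.conf -- pull individual values out of the JSON payload
[extract_systemid]
REGEX = "systemId":"([^"]+)"
FORMAT = systemId::$1
WRITE_META = true

[extract_systemname]
REGEX = "systemName":"([^"]+)"
FORMAT = systemName::$1
WRITE_META = true
```

Unless index-time fields are strictly required, a search-time extraction (an EXTRACT- regex in props.conf) is usually preferred, since index-time fields increase index size and cannot be changed without re-indexing.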
Hello guys, I am new to Splunk and I need some help (also Splunk search language documentation with examples). My search is:

index=waf source="waf_events" | stats count by remote_addr, msg | mvcombine msg

As a result I have a row like:

IP address     rule name               count
192.168.1.1    Anomally connection     1
               Bad user name

It shows the IP address, the names of the rules that the IP address broke, and a count per IP address. But I want it to show a rule count, something like this:

192.168.1.1   Anomally connection, Bad user name    2 (two rules)
Hello Splunkers, I have used Unicode characters to display a trend in my Splunk dashboard, but those characters are too small to be presented in the dashboard. Is there a way to increase their size or to color those icons?
Hi all, I am trying to create a fourth column which would display all values between a given time range in a single cell. For the screenshot provided, this would mean a cell in the fourth column containing: 10:00 11:00 12:00. I am not sure how to do this without using a loop. Thanks, Jacob
Hello Splunkers, I would like to create a timechart for status. The data only comes when there is an update, so generally there is one event when a ticket opens and one when it closes. How should I approach visualising this?

Data:

Problem ID   Start Time   End Time   ManagementZone
10           09:00        null       CAT
11           09:00        null       DOG
10           09:00        09:30      CAT
12           10:00        null       CAT
13           10:00        null       DOG
11           09:00        11:30      DOG
12           10:00        11:30      CAT
13           10:00        12:00      DOG
14           15:00        null       CAT
15           15:30        null       DOG

Desired Outcome:

       08:00  09:00  10:00  11:00  12:00  13:00  14:00  15:00  16:00  17:00
CAT    0      1      1      1      0      0      0      1      1      1
DOG    0      1      2      2      0      0      0      0      1      1

Thank you all in advance.
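The desired outcome above can be reproduced from the update events by keeping only the latest update per problem ID, then counting, for each hour, the problems that are open at that instant (started at or before the hour, and not yet ended). A minimal Python sketch of that logic (in SPL this maps roughly onto dedup/concurrency-style approaches; the interval treatment, end-time exclusive, is my assumption that matches the desired table):

```python
# Event updates as they arrive: (problem_id, start, end, zone); end=None means still open.
updates = [
    (10, "09:00", None,    "CAT"),
    (11, "09:00", None,    "DOG"),
    (10, "09:00", "09:30", "CAT"),
    (12, "10:00", None,    "CAT"),
    (13, "10:00", None,    "DOG"),
    (11, "09:00", "11:30", "DOG"),
    (12, "10:00", "11:30", "CAT"),
    (13, "10:00", "12:00", "DOG"),
    (14, "15:00", None,    "CAT"),
    (15, "15:30", None,    "DOG"),
]

def minutes(hhmm):
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

# Keep only the most recent update for each problem ID.
latest = {}
for pid, start, end, zone in updates:
    latest[pid] = (minutes(start), None if end is None else minutes(end), zone)

hours = [f"{h:02d}:00" for h in range(8, 18)]

def open_counts(zone):
    """Problems of this zone open at each hour (start <= t, end after t or unset)."""
    counts = []
    for hh in hours:
        t = minutes(hh)
        n = sum(1 for start, end, z in latest.values()
                if z == zone and start <= t and (end is None or t < end))
        counts.append(n)
    return counts

print("CAT", open_counts("CAT"))
print("DOG", open_counts("DOG"))
```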
Hello Splunk Community, I would like to add time to an event but at the same time keep the present time. So, start with the present time and also add one day to the event. Below I have the concept, but I still need some guidance on how to include present-time results while also adding one day to the event. Any advice out there?

| makeresults
| eval start_time=relative_time(_time,"-1d@d")
| eval end_time=start_time+3600*24 + 1
| eval the_time=mvrange(start_time, end_time, 3600)
| mvexpand the_time
| rename the_time as _time
| table _time
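One hedged option in SPL is to append the present time to the multivalue before expanding, e.g. an extra | eval the_time=mvappend(the_time, now()) placed before the mvexpand (a sketch, not verified against the poster's dashboard). The range-plus-now idea in Python:

```python
import time

now = int(time.time())
day = 86400
# Start of the window one day back (an approximation of
# relative_time(_time, "-1d@d"); snapping to midnight is skipped for brevity).
start = now - day
# One timestamp per hour, like mvrange(start_time, end_time, 3600)
# with end_time = start + 3600*24 + 1 ...
hourly = list(range(start, start + day + 1, 3600))
# ... then keep the present time as an extra row.
hourly.append(now)
print(len(hourly))
```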
Howdy Guys, we were previously getting Windows Application event logs through with a simple stanza that whitelisted only the 11707 event. The data was coming through in non-XML form and was rather clean when searching for these events in Splunk. However, we recently deployed "Splunk_TA_windows" to all desktops, which includes the Windows Application event logs, but it sends them in XML format. This is OK, as I believe XML is preferred for licensing/ingestion in Splunk, but it means one of our simple reports no longer works, because the fields it was looking for are no longer there (the Windows TA seems to be taking over the previous simple app's 11707 event ingestion). It appears the TA does not extract anything from <EventData></EventData>; it only grabs the whole block, but I am interested in getting the "Product:" value from that block. Sample data:

<EventData><Data>Product: Microsoft Visual C++ 2010 x64 Redistributable - 10.0.40219 -- Installation completed successfully.</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data></Data><Binary>7B31443845363239312D423044352D333545432D383434312D3636313646353637413046377D</Binary></EventData></Event>
<EventData><Data>Product: Surface Pro Update 21.012.27402.0 (64 bit) -- Installation completed successfully.</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data></Data><Binary>7B42423130324541442D453133362D343946382D384645312D4138383831373442364646397D</Binary></EventData></Event>
<EventData><Data>Product: Surface Pro Update 19.092.25297.0 (64 bit) -- Installation completed successfully.</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data>(NULL)</Data><Data></Data><Binary>7B36363938354431392D323831452D343736442D394242452D3645453944464131413433387D</Binary></EventData></Event>

Given I suck at regex, how could I extract "Product:*" from the above events, so I could add it to a local transforms.conf to extract the string I need?

[product_string_for_11707_events]
REGEX = ??????
FORMAT = product::"$1"

Any and all assistance appreciated.
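Since the Product string sits in the first <Data> element and ends with a " -- <status>" suffix, one candidate pattern (an illustration, not the TA's own extraction) can be sanity-checked outside Splunk; Python's re is close enough to Splunk's PCRE for this expression:

```python
import re

# Sample <EventData> payloads from the 11707 events (trimmed to the first <Data> element).
samples = [
    "<EventData><Data>Product: Microsoft Visual C++ 2010 x64 Redistributable - 10.0.40219 -- Installation completed successfully.</Data>",
    "<EventData><Data>Product: Surface Pro Update 21.012.27402.0 (64 bit) -- Installation completed successfully.</Data>",
]

# Capture everything between "Product: " and the " -- " status suffix.
# The same pattern could be tried in transforms.conf as:
#   REGEX  = <Data>Product:\s+(.*?)\s+--\s
#   FORMAT = product::$1
PATTERN = re.compile(r"<Data>Product:\s+(.*?)\s+--\s")

products = [m.group(1) for s in samples for m in [PATTERN.search(s)] if m]
print(products)
```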
Hi, I have a very large dataset that appears as multivalued, as below:

| makeresults
| eval data1="Windows_7,Unknown,Windows_2012,Windows_7,Windows_8,Windows_10"
| eval data2="LAPTOP PC,SERVER,APPLIANCE,DESKTOP,ROUTER,SWITCH"
| makemv delim="," data1
| makemv delim="," data2

When I try to use mvexpand with the technique below, it exceeds the default memory limit of 500MB. I increased it to 5000MB but the problem remains. I tried limiting _raw and multiple other techniques, to no avail. I cannot use stats because cardinality is high; I have more fields, roughly 8, to expand. Below is a great solution which I have been using in many places for smaller datasets, but not for larger ones. For example, my saved search produces ~6000 records; when those records are expanded, I expect around 30,000 rows. I need output like below but without mvexpand, so we do not have to worry about mvexpand memory limits and still get the complete dataset. Is this possible?

| makeresults
| eval data1="Windows_7,Unknown,Windows_2012,Windows_7,Windows_8,Windows_10"
| eval data2="LAPTOP PC,SERVER,APPLIANCE,DESKTOP,ROUTER,SWITCH"
| makemv delim="," data1
| makemv delim="," data2
| eval Asset_Category=mvrange(0,mvcount(data1))
| mvexpand Asset_Category
| eval data1=mvindex(data1,Asset_Category)
| eval data2=mvindex(data2,Asset_Category)

Thanks in advance!
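A commonly suggested workaround (an assumption here, not from the original post) is to zip the parallel multivalue fields into a single field before expanding, e.g. | eval zipped=mvzip(data1,data2) | mvexpand zipped, then split each value back out, so only one field is duplicated per output row. The row fan-out itself is a plain positional zip; a minimal Python sketch of what the expansion should produce:

```python
# Each row fans out into one output row per index position,
# pairing the values of the parallel multivalue fields positionally.
data1 = "Windows_7,Unknown,Windows_2012,Windows_7,Windows_8,Windows_10".split(",")
data2 = "LAPTOP PC,SERVER,APPLIANCE,DESKTOP,ROUTER,SWITCH".split(",")

rows = list(zip(data1, data2))
for os_name, device_type in rows:
    print(os_name, device_type)
```

If the field values can contain commas, mvzip's delimiter argument should be set to something safer before splitting back out.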
For example, on a railroad schematic diagram, based on query data output? By "dynamically", I'd like to show an icon (an alarm) at a position when the data value corresponding to that position on the schematic diagram exceeds a threshold. If the corresponding value is below the threshold, no icon should be shown. Essentially, I want to show an alarm at that position on the diagram. It would be similar to the room-occupancy example in "Splunk Dashboards (beta)" (app/splunk-dashboard-app/example-hub-workplace-readiness-detail), except that I only want to show the rooms with occupancy higher than a threshold, not those below it. For static icon display it is similar to Splunk's "choropleth map" visualization, but my coordinates have no geolocation bearing; I just want to place icons of my choosing at arbitrary coordinates on the schematic diagram, based on whether the data value exceeds a threshold. I have explored "Splunk Dashboards (beta)", but it does not permit dynamically showing and hiding such an alarm. I wonder what a possible approach would be? I heard that Splunk's custom renderer might be able to do this. Or should I just export the data from Splunk and do the special visualization outside Splunk? Is there any example of a "custom renderer" overlaying a visualization of query data on top of a schematic diagram, with dynamic show/hide behavior? Some pointers and examples would be appreciated. Thanks in advance!
Can I get someone to help me troubleshoot my problem with the fundamentals lab, module 5? I am running the exercise but I am getting no results, and I don't know why yet.
Hi Team, I have created a Splunk dashboard and am trying to get the percent symbol (%) only on the bars in a column chart with a chart overlay. I tried this .js file, which I got from https://community.splunk.com/t5/Splunk-Search/How-to-add-symbol-with-data-labels-in-charts/m-p/361595:

require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc){
    mvc.Components.get("myHighChart").getVisualization(function(chartView) {
        chartView.on("rendered", function() {
            $("g.highcharts-data-label text:not(:contains(%)) tspan").after(" %");
            $("g.highcharts-yaxis-labels text:not(:contains(%)) tspan").after(" %");
        });
    });
});

With this I get the % symbol on both the bar values and the chart-overlay values. I need % only on the bar graph. Please help me. Thanks in advance.
Hello, I am trying (without success) to use a custom search to populate the list of possible values of a field in a dropdown input of a dashboard I am working on. Here is the search I use to look at "exotic" firewall logs and extract the possible RULE_NAME field values:

index=my_index | rex field=_raw ".*\s(?<HOSTNAME>\S+)\s(?<PROCESS>\S+):\s.*\s(?<ACTION>(Allow|Deny))\s(?<SRC_INT>\S+)\s(?<DST_INT>\S+)\s.*(?<PR>(icmp|tcp|udp)).*\s(?<SRC_IP>[[octet]](?:\.[[octet]]){3})\s(?<DST_IP>[[octet]](?:\.[[octet]]){3})\s(?<SRC_PORT>\d{1,5})\s(?<DST_PORT>\d{1,5})\s.*\((?P<RULE_NAME>.*)?(-00)\)$" | stats values(RULE_NAME) | sort -n

It successfully displays what I need when I run it in a plain search window.

Here is what I put in the fields of the dropdown menu:

Input Type: Dropdown
Label: Rule Name
Token: rule_name_token
A static option named ANY with "*" as its value (also set as the default)
Field For Label: RULE_NAME
Field For Value: RULE_NAME

Problem: nothing appears but ANY in my dropdown list (even though I can briefly see the "populating..." keyword displayed under the dropdown input for about a second). Any help please? I have certainly missed something. Thanks, Florent
How do I find which Splunk server is the designated captain of my search head cluster?