All Topics


Hi all, can somebody help me set the bar chart colors based on log_level? In the attached chart, I want the colors to follow the log level: when log_level is Error the bar should be red, when it is Info it should be yellow, and when it is Warn it should be green. I have tried many options, but nothing is working out. When the chart is split by log level, the colors should change automatically.
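If the chart splits its series by a log_level field with the values Error, Info, and Warn, one possible approach is to pin each series name to a fixed color with the charting.fieldColors option in the panel's Simple XML. The search below is a placeholder; only the option line matters:

```xml
<chart>
  <search>
    <!-- placeholder search: split the count by log_level -->
    <query>index=app_logs | timechart count by log_level</query>
  </search>
  <!-- Pin each log_level series to a color: Error=red, Info=yellow, Warn=green -->
  <option name="charting.fieldColors">{"Error": 0xFF0000, "Info": 0xFFFF00, "Warn": 0x00FF00}</option>
</chart>
```

The keys must match the series names exactly as they appear in the legend.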
I have a lookup file in this format:

"Department", "Jan FY20", "Feb FY20", "Mar FY20", "Apr FY20"
"Sales", "12", "15", "18", "17"
"HR", "7", "5", "6", "11"

Over time, the number of columns will increase, and their names may change, but they will always contain "FY". What I want to do is return the data (at search time) in the form department_name, month_name, value, i.e.:

Sales, Jan FY20, 12
Sales, Feb FY20, 15
Sales, Mar FY20, 18
Sales, Apr FY20, 17
HR, Jan FY20, 7
HR, Feb FY20, 5
HR, Mar FY20, 6
HR, Apr FY20, 11

I'm sure that there's a simple function to do this, but I can't work it out. What is the best way to do this?
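The untable command does exactly this unpivot at search time. A minimal sketch, assuming the lookup file is named departments.csv (a placeholder) and the first column is Department:

```spl
| inputlookup departments.csv
| untable Department month_name value
| search month_name="*FY*"
```

untable keeps the first listed field as the row label and turns every other column into a (month_name, value) pair; the final search keeps only the FY columns, so new month columns are picked up automatically.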
I would like to find occurrences of name and prename in email log files, and only report the events that match both columns of an inputlookup table. An event from the email server contains Envelope-Sender (suser), Recipient (duser), Content-Sender (from), and some more fields that are not interesting for this task. My lookup table names.csv looks like:

name,prename,comment
smith,winston,added on 15.05.20

When using two subsearches in a regular search like the following:

index="mail" sourcetype="mailserver" direction="incoming" [| inputlookup names.csv | eval from="*".name."*" | fields from | format] [| inputlookup names.csv | eval from="*".prename."*" | fields from | format] | fields suser,duser,from

all matches are displayed: matches for name and matches for prename. I tried to use the WHERE clause, but the from field does not exist in the lookup table. I did not manage to find a regex that can search for both field contents (name and prename) in the same eval clause, independent of the location of the search patterns in the target string. Maybe if clauses can be used in nested form? I also tried a join like this:

(index="mail" sourcetype="mailserver" direction="incoming" [| inputlookup names.csv | eval from="*".prename."*" | fields from | format]) | join type=inner from [search index="mail" sourcetype="mailserver" direction="incoming" [| inputlookup names.csv | eval from="*".name."*" | fields from | format]] | table _time,suser,duser,from

but did not get any matches, although the data does have from entries with both values (name and prename) in them. The lookup table names.csv was created to be case-insensitive.

Is there a way to join two subsearches and get only the values that matched both searches? Or is there an easy way to use an eval clause on a single field that can search for two patterns at the same time, independent of the location of the patterns in the search field? Thank you.
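One hedged sketch: instead of two independent subsearches (whose results effectively OR together), build a single subsearch that emits one clause per lookup row requiring both values. When a subsearch returns a field literally named search, its value is inserted into the outer search as raw search syntax, and multiple rows are ORed:

```spl
index="mail" sourcetype="mailserver" direction="incoming"
    [| inputlookup names.csv
     | eval search="(from=\"*" . name . "*\" AND from=\"*" . prename . "*\")"
     | fields search]
| table _time, suser, duser, from
```

Each lookup row becomes a clause like (from="*smith*" AND from="*winston*"), so an event only matches when both the name and the prename appear in from, regardless of their position.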
My setup has 2 sources imported from CSV: test1.csv and test2.csv. Both files have fields with dates in them (12_May, 11_May, 10_May, etc.); the only difference is that another file might not have one of the dates. So:

test1: 10_May 11_May 12_May
test2: 10_May 12_May

11_May is missing from test2. I can see 11_May when I use the source file test1, but as soon as I add the other file (test2) to the source, the search breaks. I will have many CSV files being imported with missing date fields, and this won't be consistent. I have tried:

source="*" test1 OR test2
test1 AND test2

Basically, what I want is: if the date field does not exist in one of the CSV files, just put 0 into the column we have created for all dates in the table. So it would be:

Test 1:
Name 10_May 11_May 12_May Total
Joe  2      3      0      5

Test 2:
Name 10_May 12_May Total
Joe  2      0      2

The Splunk dashboard should show:
Name 10_May 11_May 12_May Total
Joe  2      3      0      5

But the whole thing breaks when dealing with missing date fields. Could you please put me on the right path on how I should solve this? Thanks for reading.
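A hedged sketch (the index and source patterns are placeholders): once both CSV sources are searched together, fillnull with no field list writes 0 into every column that is missing from a given result row, and addtotals then sums each row:

```spl
index=csv_imports source="*test*.csv"
| table Name *_May
| fillnull value=0
| addtotals fieldname=Total
```

Because fillnull runs over the combined result set, a date column that exists in any of the files is filled with 0 for rows coming from files that lack it.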
I am getting logs daily in the format below, and I am looking to find each Link whose Status went DOWN but never came back UP, and the date on which it went DOWN.

Date="8 May 2020" Link="X" Status="UP"
Date="9 May 2020" Link="Y" Status="DOWN"
Date="10 May 2020" Link="X" Status="UP"
Date="11 May 2020" Link="X" Status="DOWN"
Date="12 May 2020" Link="Y" Status="UP"
Date="13 May 2020" Link="X" Status="DOWN"

For example, in the above case Link X went down on 11 May, and the log on 13 May shows it is still down, so it went down on the 11th and has been down for 2 days. The following query works, but the issue is that streamstats keeps 10000 events by default, so it doesn't get data for all links, because there are more logs than that.

| makeresults
| eval _raw="Date=\"8 May 2020\" Link=\"X\" Status=\"UP\" Date=\"9 May 2020\" Link=\"Y\" Status=\"DOWN\" Date=\"10 May 2020\" Link=\"X\" Status=\"UP\" Date=\"11 May 2020\" Link=\"X\" Status=\"DOWN\" Date=\"12 May 2020\" Link=\"Y\" Status=\"UP\" Date=\"13 May 2020\" Link=\"X\" Status=\"DOWN\""
| multikv noheader=t
| kv
| table Date Link Status
| eval Date=strptime(Date,"%d %B %Y")
| fieldformat Date=strftime(Date,"%F")
| sort Link Date
| streamstats current=f last(Status) as prev by Link
| streamstats count(eval(Status!=prev)) as changed by Link
| eventstats last(changed) as session by Link
| where changed==session
| stats min(Date) as start max(Date) as end values(Status) as Status by session Link
| where Status="DOWN"
| convert ctime(start) ctime(end) timeformat="%F"
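A hedged alternative that sidesteps the streamstats window entirely: compute the answer with plain stats, which has no per-group event cap. The sketch assumes at most one status event per link per day, so "the day after the last UP" is the day the link went down (index/sourcetype are placeholders):

```spl
index=network_logs sourcetype=link_status
| eval Date=strptime(Date,"%d %B %Y")
| stats latest(Status) as last_status
        max(eval(if(Status=="UP", Date, null()))) as last_up
        min(eval(if(Status=="DOWN", Date, null()))) as first_down
        max(Date) as last_seen
        by Link
| where last_status="DOWN"
| eval down_since=if(isnull(last_up), first_down, last_up + 86400)
| fieldformat down_since=strftime(down_since, "%F")
| fieldformat last_seen=strftime(last_seen, "%F")
```

Another option is raising the streamstats window limit (max_stream_window in limits.conf), though that trades memory for window size rather than removing the cap.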
I was trying to send CloudWatch data to Splunk using this blog: https://www.splunk.com/en_us/blog/cloud/how-to-easily-stream-aws-cloudwatch-logs-to-splunk.html but I am getting this error:

2020-05-18T09:53:17.070Z XXXXXXX ERROR Invoke Error
{
  "errorType": "Error",
  "errorMessage": "connect ETIMEDOUT X.X.X.X:8088",
  "code": "ETIMEDOUT",
  "errno": "ETIMEDOUT",
  "syscall": "connect",
  "address": "X.X.X.X",
  "port": 8088,
  "stack": [
    "Error: connect ETIMEDOUT X.X.X.X:8088",
    "    at TCPConnectWrap.afterConnect [as oncomplete]"
  ]
}
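ETIMEDOUT on port 8088 means the TCP connection to the HTTP Event Collector endpoint never completed, which usually points at a security group, NACL, or firewall between the Lambda function's network and the Splunk host rather than at the function itself. A quick hedged check from a machine in the same network as the Lambda (host and token are placeholders):

```shell
curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "connectivity test"}'
```

A working HEC setup answers with {"text":"Success","code":0}; a hang followed by a timeout reproduces the Lambda's symptom and confirms a network-path problem.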
Hi, how can I fetch the result of an existing report in Splunk (a report that has already been executed) using the REST API? The report generates a table. Thanks, Santosh
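A hedged sketch using the management REST API (host, credentials, and report name are placeholders): first list the report's past dispatched jobs to obtain a search ID (sid), then pull that job's results:

```shell
# 1. List dispatched jobs (SIDs) for the saved search
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/-/-/saved/searches/My%20Report/history"

# 2. Fetch the results of one of those jobs as JSON
curl -k -u admin:changeme \
  "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"
```

This only works while the dispatched job's artifacts are still within their retention window; otherwise the report has to be dispatched again.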
I am trying to connect to my 2nd LDAP instance using the SA-LDAPSearch app (Splunk Supporting Add-on for Active Directory 3.0.1) and am getting the error below:

External search command 'ldaptestconnection' returned error code 1. First 1000 (of 1921) bytes of script output: "error_message= # host: <hostname>: Could not access the directory service at ldaps://<hostname>:<ldaps_port>: ('unable to open socket', [(datetime.datetime(2020, 5, 18, 10, 57, 16, 524688), , LDAPSocketOpenError('socket connection error while opening: [Errno 110] Connection timed out',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 57, 31, 532624), , LDAPSocketOpenError('socket ssl wrapping error: [Errno 104] Connection reset by peer',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 59, 38, 860630), , LDAPSocketOpenError('socket connection error while opening: [Errno 110] Connection timed out',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 59, 53, 851213), , LDAPSocketOpenError('socket ssl wrapping error: [Errno 104] Connection reset by peer',), ('".

I do have a working LDAP connection on a different domain that works fine and does not throw any error. Are there any configs I am missing, or is it a connectivity issue between my Splunk server and the LDAP server?
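The alternating "Connection timed out" and "Connection reset by peer during SSL wrapping" errors suggest the problem sits between the Splunk server and the second domain's LDAP server (a firewall, or a device interfering with TLS) rather than in the add-on's configuration. Two hedged checks to run from the Splunk server itself (hostname and port are placeholders; 636 is the conventional LDAPS port):

```shell
# Is the LDAPS port reachable at all?
nc -vz <ldap-host> 636

# Does a TLS handshake complete, and which certificate comes back?
openssl s_client -connect <ldap-host>:636 -showcerts </dev/null
```

If the handshake fails or an unexpected certificate appears, the add-on configuration is not the culprit.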
Hi, I have a requirement to save the search query, after it has run, to a file. Basically, I want a file with the query name and the query text, so that users can save and load their queries back in the dashboard. How can I save a query to a file using outputcsv from the Splunk dashboard? How do I get hold of the search query with resolved token values? Also, is there a clean way that Splunk provides to save a dashboard query to a file? Thanks.
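One hedged sketch for the storage half: a small search that appends a named query string to a CSV via outputcsv (the file and field names here are made up for illustration). The query text itself would have to be supplied, for example through a text input token, since Simple XML does not expose a panel's fully resolved search string as a documented token:

```spl
| makeresults
| eval query_name="my_report", query_text="index=web status=500 | stats count"
| fields query_name query_text
| outputcsv append=true saved_queries
```

Loading back is then | inputcsv saved_queries filtered by query_name.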
Since different users have different screen sizes for work, is it possible for the dashboard to adjust its size according to the screen size, so that panel values do not end up overlapping each other? Or can the browser zoom be adjusted automatically, without the user setting it manually? I am attaching some screenshots with the browser zoom set to 100% and 90%.
Hi All, I created an alert like the one below, which is working fine:

index=rxc sourcetype="rxc_app" response_status=* [| inputlookup a.csv | rename site as header | fields header] earliest=-15m@m latest=now
| stats count AS Total count(eval(response_status like "5%")) AS Error_Count by endpoint
| eval Error_perc=round((Error_Count/Total)*100)
| fields endpoint Error_Count Error_perc
| where (Error_perc>1 AND Error_Count>25)
| join endpoint [search index=rxc sourcetype="rxc_app" response_status=5* [| inputlookup a.csv | rename site as header | fields header] earliest=-15m@m latest=now | stats count(eval(response_status like "5%")) As Error, values(header) as name by endpoint | where Error>25]
| table endpoint name Error Error_perc

But there is usually a spike in errors for about a minute or two, and then it subsides. So what I want to build from this is logic that checks the percentage of errors every five minutes and triggers an alert only after that threshold has been breached for five consecutive minutes. For example, within the first five minutes the error percentage was 10 and the error count was 485; in the next five minutes the error percentage decreased, and so did the error count, so it should not trigger an alert — but if the breach were continuous, then it should.
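A hedged sketch of the consecutive-breach logic (the index and thresholds come from the question; the 1-minute span over a 5-minute window is an assumption): flag each minute that breaches, then match only endpoints where every minute in the window breached:

```spl
index=rxc sourcetype="rxc_app" response_status=* earliest=-5m@m latest=@m
| bin _time span=1m
| stats count as Total count(eval(response_status like "5%")) as Error_Count by _time, endpoint
| eval breach=if(round(Error_Count/Total*100)>1 AND Error_Count>25, 1, 0)
| stats sum(breach) as breached_minutes dc(_time) as minutes_seen by endpoint
| where breached_minutes=5 AND minutes_seen=5
```

The alert itself would then run every five minutes with a "trigger when number of results > 0" condition.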
Hello everyone, I have a table like this:

DVN  Region  Name    Count
201  SAM     Shapes  20010
201  SAM     Points  24218
202  SAM     Shapes  20102
202  SAM     Points  23231

I want to calculate the difference between Count values for rows whose Name is the same but whose DVN is different. For example, for the Shapes name, the difference between the 3rd row and the 1st row should be taken. My existing query producing this table looks like this:

index=** | rex field=_raw "{.?(?{(.?})).}"
| eval trimVal = trim(replace(ps, "\\", ""))
| spath input=trimVal
| where region=$region$
| where inputFeatureName="Shapes" OR inputFeatureName="Points"
| rename partitionName AS PartitionName, inputFeatureName AS FeatureName, inputFeatureCount AS FeatureCount, dvn AS DVN, region AS Region
| where isnotnull(PartitionName)
| table PartitionName, DVN, Region, FeatureName, FeatureCount
| stats sum(FeatureCount) as Count by DVN, Region, FeatureName
| sort Region

Any help is appreciated, thanks.
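A hedged sketch of the difference step, appended after the existing | stats ... | sort pipeline: order the rows so the two DVNs of each Name sit next to each other, carry the previous Count forward, and subtract (assumes exactly two DVNs per Region/Name pair):

```spl
| sort Region, FeatureName, DVN
| streamstats current=f window=1 last(Count) as prevCount by Region, FeatureName
| eval CountDiff=Count - prevCount
```

Rows for the earlier DVN have no prevCount, so CountDiff only appears on the later DVN's row; filter with | where isnotnull(CountDiff) if only the differences are wanted.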
I'm trying to plot the count of errors from last week per day, alongside the daily average value over the month. The query below gives me only the result for Monday (the other weekdays are missing). What did I do wrong?

avg(count)  DailyCount  Dayweek
6903.6      3730        1 - Mon

index="abc" sourcetype=alarms_log earliest=-30d@d latest=@d
| bucket _time span=1day
| stats count by _time
| stats avg(count)
| join [search index="abc" sourcetype=alarms_log earliest=-7d@d latest=-1d@d
    | timechart span=1d count as DailyCount
    | eval Dayweek=strftime(_time,"%w - %a") ]

Regards, Szymon
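The likely culprit: the outer stats avg(count) collapses everything to a single row, and join with no join field pairs rows positionally, so only the subsearch's first row (Monday) survives. One hedged rework that keeps all weekdays: make the weekly timechart the outer search, attach the single monthly average with appendcols, and copy it down to every row:

```spl
index="abc" sourcetype=alarms_log earliest=-7d@d latest=-1d@d
| timechart span=1d count as DailyCount
| eval Dayweek=strftime(_time,"%w - %a")
| appendcols
    [search index="abc" sourcetype=alarms_log earliest=-30d@d latest=@d
     | bucket _time span=1day
     | stats count by _time
     | stats avg(count) as MonthlyAvg]
| filldown MonthlyAvg
```

appendcols pastes the one-row subsearch onto the first result row, and filldown propagates MonthlyAvg to the remaining days so both series can be charted together.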
Hi All, I have a dashboard with a timechart, and when I drill down on a server name it has to fetch details from a DBX query. Instead of hard-coding the server name in the dbxquery, I tried to pass it via a token from the timechart:

|search $tokvalue_XX$

On the main panel:

<drilldown>
  <set token="tokvalue_XX">$click.name2$</set>
  <set token="clicked_earliest">$earliest$</set>
  <set token="clicked_latest">$latest$</set>
</drilldown>

For the drilldown part:

<row depends="$tokvalue_XX$">
  <panel>
    <title>Drilldown - $tokvalue_XX$ from $clicked_earliest$ to $clicked_latest$</title>
    <table>
      <search>
        <query>| dbxquery connection="CONN" query="SELECT QUERY" shortnames=true | search \"\*$$tokvalue_XX$$\*\" | table SERVER, PROCESS,SQL_ID,PREV_SQL_ID,"Lock offset","Elapsed Time Secs"</query>
        <earliest>$clicked_earliest$</earliest>
        <latest>$clicked_latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="drilldown">cell</option>
    </table>
  </panel>
</row>

When I click on the server name in the timechart, the second panel appears but doesn't pull any data from the query. The query works when the server name is passed directly in Splunk and DB Connect. I have tried passing the value as $$tokvalue_XX$$, $tokvalue_XX$, and "$tokvalue_XX$".
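A hedged note on the token syntax: in Simple XML, $$ is the escape for a literal dollar sign, so \"\*$$tokvalue_XX$$\*\" searches for the literal text $tokvalue_XX$ (and the backslashes are searched literally as well) instead of the clicked value. A sketch of the query element with a single-$ token and no escaping:

```xml
<query>| dbxquery connection="CONN" query="SELECT QUERY" shortnames=true
| search "*$tokvalue_XX$*"
| table SERVER, PROCESS, SQL_ID, PREV_SQL_ID, "Lock offset", "Elapsed Time Secs"</query>
```

It is also worth verifying in the browser that the panel title actually shows the expected server name, which confirms $click.name2$ captured the right value before the dbxquery runs.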
We have configured Splunk ES, in which most of the dashboards are predefined. I want to add a severity field to the vulnerability-by-age dashboard in Splunk ES. Please help me solve this.
Why can't I edit a correlation search, or search in Splunk with an Extreme Search command such as xswhere? The error "Unknown search command 'xswhere'." shows up. How can I fix it? If I only have the ess_user permission, is there any advice that would allow me to edit or search with Extreme Search?
Please refer to the code below. I have a base query and a sub-query, but I don't see any data in the panel; it gives the error "Search is waiting for input...". Kindly verify and suggest how to resolve this issue.

<form>
  <label>Power Data Fabric Dev OM Execution Metrics Clone</label>
  <description>Production System Metrics</description>
  <search id="base_Execution_Metrics">
    <query>index=us_west_dev_power_platform sourcetype=om:omagent host="10.170.*" (ExecutionStatus="RS" OR ExecutionStatus="RF") | fields sourcetype, machine, esn, CalcId, ExecST, ExecWT, ExecutionStatus</query>
    <earliest>$timeRange.earliest$</earliest>
    <latest>$timeRange.latest$</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="timepick">
      <label>Time Range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel id="ExecutionCountPerDay">
      <title>Executions Count by Day</title>
      <chart>
        <search base="base_Execution_Metrics">
          <query>| dedup sourcetype,machine,esn,CalcId,ExecST,ExecWT,ExecutionStatus | timechart span=1d count by ExecutionStatus | eval "Total Executions(RS + RF)" = RF+RS | rename RF as "Failed Executions(RF)" | table _time, "Total Executions(RS + RF)", "Failed Executions(RF)"</query>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">auto</option>
        <option name="charting.axisY.scale">log</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fieldColors">{"Total Executions(RS + RF)": 0xF1C40F, "Failed Executions(RF)": 0xFF0000}</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.placement">bottom</option>
        <option name="link.exportResults.visible">1</option>
        <option name="link.visible">0</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
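The "waiting for input" message appears when a search references a token that no input ever sets: here the base search uses $timeRange.earliest$ / $timeRange.latest$, but the only time input in the form defines the token timepick. A hedged fix is simply to make the two agree:

```xml
<search id="base_Execution_Metrics">
  <query>index=us_west_dev_power_platform sourcetype=om:omagent host="10.170.*" (ExecutionStatus="RS" OR ExecutionStatus="RF") | fields sourcetype, machine, esn, CalcId, ExecST, ExecWT, ExecutionStatus</query>
  <earliest>$timepick.earliest$</earliest>
  <latest>$timepick.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>
```

Alternatively, rename the input's token to timeRange; either way, the token names in the base search and the time input must match.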
Please, can anyone help with this? In an indexer cluster environment, one of the indexers stopped and I am unable to start/restart it:

C:\Windows\system32>d:
D:\>cd spluk\bin
The system cannot find the path specified.
D:\>cd splunk\bin
D:\Splunk\bin>.\splunk restart
Splunkd: Stopped
Splunk> All batbelt. No tights.
Checking prerequisites...
        Checking http port [8000]: open
        Checking mgmt port [8089]: open
        Checking appserver port [127.0.0.1:8065]: open
        Checking kvstore port [8191]: open
        Checking configuration... Done.
        Checking critical directories... Done
        Checking indexes... (skipping validation of index paths because not running as LocalSystem)
                Validated: _audit _internal _introspection _telemetry _thefishbucket aws_anomaly_detection aws_topology_daily_snapshot aws_topology_history aws_topology_monthly_snapshot aws_topology_playback aws_vpc_flow_logs history main summary
        Done
        Bypassing local license checks since this instance is configured with a remote license master.
        Checking filesystem compatibility... Done
        Checking conf files for problems... Done
        Checking default conf files for edits...
        Validating installed files against hashes from 'D:\Splunk\splunk-7.2.1-be11b2c46e23-windows-64-manifest'
        All installed files intact.
        Done
Checking replication_port port [7778]: open
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Splunkd: Starting (pid 6420)
Timed out waiting for splunkd to start.

Please provide a solution if anyone knows one. splunkd.log:

05-18-2020 07:31:58.157 +0000 INFO ServerRoles - Declared role=cluster_slave.
05-18-2020 07:31:58.157 +0000 INFO ServerRoles - Declared role=indexer.
05-18-2020 07:31:58.157 +0000 INFO ClusteringMgr - initing clustering with: ht=60.000 rf=3 sf=2 ct=60.000 st=60.000 rt=60.000 rct=60.000 rst=60.000 rrt=60.000 rmst=180.000 rmrt=180.000 icps=-1 sfrt=600.000 pe=1 im=0 is=1 mob=5 mor=5 mosr=5 pb=5 rep_port=port=7778 isSsl=0 ipv6=0 cipherSuite= ecdhCurveNames= sslVersions=SSL3,TLS1.0,TLS1.1,TLS1.2 compressed=1 allowSslRenegotiation=1 dhFile= reqCliCert=0 serverCert= rootCA= commonNames= alternateNames= pptr=10 fznb=10 Empty/Default cluster pass4symmkey=true allow Empty/Default cluster pass4symmkey=true rrt=restart dft=180 abt=600 sbs=1
05-18-2020 07:31:58.172 +0000 INFO ClusteringMgr - Initializing node as slave
05-18-2020 07:31:58.172 +0000 INFO BucketReplicator - Initializing BucketReplicatorMgr
05-18-2020 07:31:58.219 +0000 INFO CMServiceThread - CMHealthManager starting eloop
05-18-2020 07:31:58.235 +0000 INFO CMBundleMgr - bundle=D:\Splunk\var\run\splunk\cluster\remote-bundle\2df598296706d9846433003de4c7a927-1589221919.bundle, checksum=5F5C9F53A58CD618B69209EBC5D92286 found on the slave
05-18-2020 07:31:58.235 +0000 INFO CMBundleMgr - setting active bundle= to latest bundle=6F0874F9DA123EA345D25A77F6D3CAFA
05-18-2020 07:31:58.235 +0000 INFO CMSlave - event=getActiveBundle status=success path=D:\Splunk\var\run\splunk\cluster\remote-bundle\83209f7543173582062b08f2b77fcde0-1589259155.bundle cksum=6F0874F9DA123EA345D25A77F6D3CAFA alreadyin=0
05-18-2020 07:31:58.235 +0000 ERROR CMSlave - event=move downloaded bundle to slave-apps failed with err="failed to remove dir=D:\Splunk\etc\slave-apps.old (There are no more files.)" even after multiple attempts, Exiting..
05-18-2020 07:31:58.235 +0000 ERROR loader - Failed to download bundle from master, err="failed to remove dir=D:\Splunk\etc\slave-apps.old (There are no more files.)", Won't start splunkd.
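The two ERROR lines show why startup aborts: splunkd cannot delete D:\Splunk\etc\slave-apps.old while applying the downloaded cluster bundle. A hedged sketch of the usual remedy, run from an elevated prompt with Splunk stopped (check first that no antivirus scanner or open handle is locking the directory):

```batch
REM Make sure splunkd is fully stopped
D:\Splunk\bin\splunk.exe stop

REM Remove the stale directory the bundle swap could not delete
rmdir /s /q D:\Splunk\etc\slave-apps.old

REM Start again; the peer re-downloads the bundle from the cluster master
D:\Splunk\bin\splunk.exe start
```

If rmdir also fails, a tool that shows open handles on that directory should reveal which process is holding it.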
Hi everyone, is there a method to dynamically set the border color of nodes? I can do it for the node fill color using the color field when generating search results, but there isn't an equivalent option for the border. Looking at the Simple XML, I can see there is an option flow_map_viz.flow_map_viz.node_broder_color, which sets the border color for all nodes and is fixed. I'm trying to use the border color to indicate an issue with the host node, whereas the fill represents the service running on it, so being able to set the border dynamically would be great. Thanks!
I want a master app to have menus that appear only when other apps are installed. Splunk will not display a menu item if its dashboard does not exist: e.g. <view name="my_dashboard"/> means that, for the app "my_app", the file /app/my_app/(default|local)/data/ui/views/my_dashboard.xml must exist, otherwise that dashboard will not show in the menu. What I want is to be able to do something like <view name="my_second_app:my_second_dashboard"/> (the way you can reference stylesheets and script files from another app within a dashboard of a different app), so that if the app "my_second_app" is installed, the item shows on the menu, and if not, it won't. The only way I can seem to reach dashboards from other apps is with the <a href="href"> syntax, which of course will not hide non-installed items.