Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi @gcusello @dmarling, I have a doubt about calculating a percentage.
First query:
(index=71412-cli sourcetype=show_interface) OR (index=71412-np sourcetype=device_details) | stats values(*) as * by deviceId | search deviceName="BLV2-TI-SW_WAS18-01" | dedup interface | table deviceId interface deviceName adminStatus | sort interface | stats count(interface) as "Total no of ports"
From the first query I am fetching the total number of interfaces.
Second query:
(index=71412-cli sourcetype=show_interface adminStatus="down") OR (index=71412-np sourcetype=device_details) | stats values(*) as * by deviceId | search deviceName="BLV2-TI-SW_WAS18-01" | search adminStatus=down | dedup interface | table deviceId interface deviceName adminStatus | sort interface | stats count(interface) as "Down ports"
From the second query I am fetching only the interfaces that are admin down.
I want to find the percentage (Down ports / Total no of ports) * 100. Please help me calculate this percentage by appending these two queries.
Thanks, Priya
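One possible way to combine them (an untested sketch built only from the two queries above; the field names down_ports, total_ports and "Down port percentage" are illustrative) is to run the "down" query, append the total from the first query with appendcols, and divide with eval:

(index=71412-cli sourcetype=show_interface adminStatus="down") OR (index=71412-np sourcetype=device_details)
| stats values(*) as * by deviceId
| search deviceName="BLV2-TI-SW_WAS18-01" adminStatus=down
| dedup interface
| stats count(interface) as down_ports
| appendcols
    [ search (index=71412-cli sourcetype=show_interface) OR (index=71412-np sourcetype=device_details)
      | stats values(*) as * by deviceId
      | search deviceName="BLV2-TI-SW_WAS18-01"
      | dedup interface
      | stats count(interface) as total_ports ]
| eval "Down port percentage"=round(down_ports/total_ports*100, 2)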
Hi, I am using the query below to get the status of App_State, Node_State and Sync_State:
index=abc host=xyz
| rex field=_raw "(?ms)Host\s+Id\s:(?<Host_ID>\d+)"
| rex field=_raw "(?ms)Host\s+Name\s:\s(?<Host_Name>\w+)"
| rex field=_raw "(?ms)Host\s+Status\s:\s(?<Host_Status>[\w+\s]+)\sNode"
| rex field=_raw "(?ms)Node\s+Id\s:(?<Node_ID>\d+)"
| rex field=_raw "(?ms)Node\s+Name\s:\s(?<Node_Name>\w+)"
| rex field=_raw "(?ms)Node\s+State\s:\s(?<Node_State>[\w\s]+\w)\s+App"
| rex field=_raw "(?ms)App\s+Id\s:(?<App_ID>\d+)"
| rex field=_raw "(?ms)Label\s+Name\s:\s(?<App_Name>\w+\S+)"
| rex field=_raw "(?ms)Synchronization\s:\s(?<Sync_State>[\w\s]+Sync)\sState"
| rex field=_raw "(?ms)Sync\sState\s:\s(?<App_State>[\w\s]+\w)\s+Number"
| lookup host_lookup.csv "Host_Name"
| eval Result=if(App_State=="Running", "Ok", "NotOk")
| eval Result1=if(Node_State=="Running", "Ok", "NotOk")
| eval Result2=if(Sync_State=="In Sync", "Ok", "NotOk")
I want to create a single dashboard panel showing the outputs of Result, Result1 and Result2 combined, but I am unable to determine which visualization I should use to get my desired view. Please help me modify the query so that one panel shows all three results together. Thank you.
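One simple option is a plain table panel that puts all three checks side by side per host/node/app; a sketch only, appended to the end of the existing search above (the renamed column headers are illustrative):

| table Host_Name Node_Name App_Name Result Result1 Result2
| rename Result as "App Check", Result1 as "Node Check", Result2 as "Sync Check"

If a rolled-up view is preferred instead, the same base search could end in something like "| stats count by Result Result1 Result2" feeding a table or single-value visualizations.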
Hi, I have a loadjob command in one of my panels in a dashboard. The report whose results I am trying to load is cron-scheduled to run every 15 minutes, and the panel is set to refresh every 20 minutes. Very often the dashboard does not load any results, generally just saying "Waiting for queued job to start." I have tried adding the artifact offset to pull in past report output and get an error. What am I doing wrong? Also, is it possible to pull the timestamp of the report run into my panel as well?
<query>|loadjob savedsearch="user:Report Name"</query>
<query>|loadjob savedsearch="user:Report Name":artifact_offset=2</query>
Error in 'SearchOperator:loadjob': Cannot find artifacts for savedsearch_ident 'user:Report Name:artifact_offset=1'.
This will be a shared dashboard, so I'm trying to create it in a way that does not put pressure on the system each time it is opened, and I would like all of my panels to work the same way.
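For reference, loadjob takes artifact_offset as a separate argument rather than appended to the savedsearch ident with a colon, and the documented ident format is owner:app:savedsearchname. A hedged sketch (the app name "search" and the report name are placeholders for your actual values):

<query>| loadjob savedsearch="user:search:Report Name" artifact_offset=1</query>

For the report run timestamp, one option is to have the scheduled report itself add a column such as | eval report_run_time=strftime(now(), "%F %T"), so the run time is carried along in the loaded results; the field name here is just illustrative.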
Hi, we are using Splunk Enterprise 7.3.5 and have the following components on separate servers: License Server, Deployment Server, Heavy Forwarder, Search Head. The load on the search head seems high and growing, and a lot of it seems to come from these scripts:
/apps/splunk/splunk/bin/runScript.py
/apps/splunk/splunk/etc/apps/website_monitoring/bin/web_ping.py
Any ideas to reduce this? Thanks
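To confirm what is actually consuming CPU on the search head, a hedged starting point is the introspection data (this assumes the _introspection index is populated; the data.* field names follow the resource_usage sourcetype):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess
| stats sum(data.pct_cpu) as total_cpu by host, data.args
| sort - total_cpu

If web_ping.py dominates, the polling interval of the Website Monitoring app's inputs can likely be increased, or that app moved off the search head onto the heavy forwarder.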
I can't get any data from Azure. I understand that I need to configure the connection string and the Event Hub name, and I filled in the two setup fields ("Paste the connection string from Step 1" and "Paste the Event Hub name from Step 2"), but nothing shows up, not even in the internal index. From the server side, I can see traffic being exchanged, so I assume the connectivity is good. Is there any way I can troubleshoot this issue to see what's going wrong?
# tcpdump -i any host hsbc-multi-shrd-01-euw-evhub-mon-01.servicebus.windows.net -t
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
IP gbl20060520.hc.cloud##########.35030 > 10.102.144.196.amqps: Flags [P.], seq 1506474280:1506474345, ack 958390525, win 340, length 65
IP gbl20060520.hc.cloud##########.35028 > 10.102.144.196.amqps: Flags [P.], seq 871485393:871485446, ack 2983679033, win 1027, length 53
IP 10.102.144.196.amqps > gbl20060520.hc.cloud##########.35032: Flags [P.], seq 2685114633:2685115149, ack 3374421201, win 2048, length 516
IP 10.102.144.196.amqps > gbl20060520.hc.cloud##########.35028: Flags [P.], seq 1:7589, ack 0, win 2048, length 7588
IP gbl20060520.hc.cloud##########.35028 > 10.102.144.196.amqps: Flags [.], ack 7589, win 1145, length 0
IP 10.102.144.196.amqps > gbl20060520.hc.cloud##########.35030: Flags [P.], seq 1:1962, ack 65, win 2048, length 1961
IP gbl20060520.hc.cloud##########.35030 > 10.102.144.196.amqps: Flags [.], ack 1962, win 370, length 0
IP 10.102.144.196.amqps > gbl20060520.hc.cloud##########.35030: Flags [P.], seq 1962:3630, ack 65, win 2048, length 1668
IP gbl20060520.hc.cloud##########.35030 > 10.102.144.196.amqps: Flags [.], ack 3630, win 397, length 0
IP 10.102.144.196.amqps > gbl20060520.hc.cloud##########.35030: Flags [P.], seq 3630:6891, ack 65, win 2048, length 3261
IP gbl20060520.hc.cloud##########.35030 > 10.102.144.196.amqps: Flags [.], ack 6891, win 447, length 0
Thanks, Michael
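A generic first troubleshooting step (nothing here is specific to the Azure add-on; component names vary by add-on and version) is to look at splunkd's own error logging on the host running the input and see which components are complaining:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by host, component
| sort - count

Network traffic on port 5671/amqps only shows that packets flow; it does not confirm that authentication against the Event Hub with the supplied connection string actually succeeded.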
Aside from exporting results, is there a way of doing a one-shot archive of the indexer cluster using Splunk's archiving policy? If there is one, what are the recommended steps for doing so? Thank you.
Regards, Raj
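There is no single "archive everything now" action, but the freezing policy can copy buckets to an archive directory per index; a hedged indexes.conf sketch (the index name and path are placeholders, and this only archives buckets as they roll to frozen):

[your_index]
# Copy buckets to this path instead of deleting them when they freeze
coldToFrozenDir = /archive/splunk/your_index

A one-shot effect could be approximated by temporarily lowering frozenTimePeriodInSecs for that index so existing buckets freeze sooner, though on a cluster this should be tested carefully before use.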
Using a really basic search like the one illustrated in Example: Create a search, my freshly installed 8.1.2 responds with a lot more unrelated information in a format that is very different from exemplified in the document, i.e., something like <?xml version='1.0' encoding='UTF-8'?> <response> <sid>1258421375.19</sid> </response>   (which was also how an older server responded.) Instead, the new server's response is like   <?xml version="1.0" encoding="UTF-8"?> <!--This is to override browser formatting; see server.conf[httpServer] to disable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .--> <?xml-stylesheet type="text/xml" href="/static/atom.xsl"?> <feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>jobs</title> <id>https://myserver:8089/services/search/jobs</id> <updated>2021-03-15T22:56:36+00:00</updated> <generator build="545206cc9f70" version="8.1.2"/> <author> <name>Splunk</name> </author> <opensearch:totalResults>3</opensearch:totalResults> <opensearch:itemsPerPage>0</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <entry> <title>| archivebuckets</title> <id>https://myserver:8089/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1</id> <updated>2021-03-15T22:17:01.161+00:00</updated> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1" rel="alternate"/> <published>2021-03-15T22:17:00.000+00:00</published> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/search.log" rel="search.log"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/search_telemetry.json" rel="search_telemetry.json"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/events" rel="events"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/results" rel="results"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/results_preview" rel="results_preview"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/timeline" rel="timeline"/> <link href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/summary" rel="summary"/> <link 
href="/services/search/jobs/scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1/control" rel="control"/> <author> <name>splunk-system-user</name> </author> <content type="text/xml"> <s:dict> <s:key name="canSummarize">0</s:key> <s:key name="cursorTime">1970-01-01T00:00:00.000+00:00</s:key> <s:key name="defaultSaveTTL">604800</s:key> <s:key name="defaultTTL">600</s:key> <s:key name="delegate">scheduler</s:key> <s:key name="diskUsage">53248</s:key> <s:key name="dispatchState">DONE</s:key> <s:key name="doneProgress">1.00000</s:key> <s:key name="dropCount">0</s:key> <s:key name="earliestTime">1970-01-01T00:00:00.000+00:00</s:key> <s:key name="eventAvailableCount">0</s:key> <s:key name="eventCount">0</s:key> <s:key name="eventFieldCount">0</s:key> <s:key name="eventIsStreaming">1</s:key> <s:key name="eventIsTruncated">0</s:key> <s:key name="eventSearch">archivebuckets </s:key> <s:key name="eventSorting">none</s:key> <s:key name="isBatchModeSearch">0</s:key> <s:key name="isDone">1</s:key> <s:key name="isEventsPreviewEnabled">0</s:key> <s:key name="isFailed">0</s:key> <s:key name="isFinalized">0</s:key> <s:key name="isPaused">0</s:key> <s:key name="isPreviewEnabled">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isRemoteTimeline">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isSavedSearch">1</s:key> <s:key name="isTimeCursored">0</s:key> <s:key name="isZombie">0</s:key> <s:key name="keywords"></s:key> <s:key name="label">Bucket Copy Trigger</s:key> <s:key name="latestTime">2021-03-15T22:17:00.000+00:00</s:key> <s:key name="normalizedSearch"></s:key> <s:key name="numPreviews">0</s:key> <s:key name="optimizedSearch">| archivebuckets</s:key> <s:key name="phase0"></s:key> <s:key name="phase1">archivebuckets | timeliner remote=0 partial_commits=0 max_events_per_bucket=500000 fieldstats_update_maxperiod=60 bucket=0</s:key> <s:key name="pid">825113</s:key> <s:key name="priority">5</s:key> <s:key name="provenance">scheduler</s:key> <s:key name="remoteSearch"></s:key> <s:key name="reportSearch"></s:key> <s:key name="resultCount">0</s:key> <s:key name="resultIsStreaming">1</s:key> <s:key name="resultPreviewCount">0</s:key> <s:key name="runDuration">0.89</s:key> <s:key name="sampleRatio">1</s:key> <s:key name="sampleSeed">0</s:key> <s:key name="savedSearchLabel">{"owner":"nobody","app":"splunk_archiver","sharing":"app"}</s:key> <s:key name="scanCount">0</s:key> <s:key name="search">| archivebuckets</s:key> <s:key name="searchCanBeEventType">0</s:key> <s:key name="searchLatestTime">1615846620.000000000</s:key> <s:key name="searchTotalBucketsCount">0</s:key> <s:key name="searchTotalEliminatedBucketsCount">0</s:key> <s:key name="sid">scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1</s:key> <s:key name="statusBuckets">0</s:key> <s:key name="ttl">4825</s:key> <s:key name="performance"> <s:dict> <s:key name="command.archivebuckets"> <s:dict> <s:key name="duration_secs">0.858</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">0</s:key> </s:dict> </s:key> <s:key name="command.timeliner"> <s:dict> <s:key name="invocations">1</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">0</s:key> </s:dict> </s:key> <s:key name="dispatch.createdSearchResultInfrastructure"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.archivebuckets"> <s:dict> <s:key name="invocations">2</s:key> 
</s:dict> </s:key> <s:key name="dispatch.finalWriteToDisk"> <s:dict> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.readEventsInResults"> <s:dict> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.timeline"> <s:dict> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.writeStatus"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">4</s:key> </s:dict> </s:key> <s:key name="startup.configuration"> <s:dict> <s:key name="duration_secs">0.02</s:key> <s:key name="invocations">2</s:key> </s:dict> </s:key> <s:key name="startup.handoff"> <s:dict> <s:key name="duration_secs">0.092</s:key> <s:key name="invocations">2</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="messages"> <s:dict/> </s:key> <s:key name="request"> <s:dict> <s:key name="auto_cancel">0</s:key> <s:key name="auto_pause">0</s:key> <s:key name="buckets">0</s:key> <s:key name="earliest_time"></s:key> <s:key name="index_earliest"></s:key> <s:key name="index_latest"></s:key> <s:key name="indexedRealtime"></s:key> <s:key name="indexedRealtimeMinSpan"></s:key> <s:key name="indexedRealtimeOffset"></s:key> <s:key name="latest_time">now</s:key> <s:key name="lookups">1</s:key> <s:key name="max_count">500000</s:key> <s:key name="max_time">0</s:key> <s:key name="reduce_freq">10</s:key> <s:key name="rt_backfill">0</s:key> <s:key name="rt_maximum_span"></s:key> <s:key name="sample_ratio">1</s:key> <s:key name="spawn_process">1</s:key> <s:key name="time_format">%FT%T.%Q%:z</s:key> <s:key name="ui_dispatch_app"></s:key> <s:key name="ui_dispatch_view"></s:key> </s:dict> </s:key> <s:key name="eai:acl"> <s:dict> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> <s:item>splunk-system-user</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>*</s:item> <s:item>splunk-system-user</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="owner">splunk-system-user</s:key> <s:key name="modifiable">1</s:key> <s:key name="sharing">global</s:key> <s:key name="app">splunk_archiver</s:key> <s:key name="can_write">1</s:key> <s:key name="ttl">7200</s:key> </s:dict> </s:key> <s:key name="searchProviders"> <s:list/> </s:key> </s:dict> </content> </entry> <entry> ... </entry> <entry> ... </entry> ... </feed>   So instead of one simple <sid/> property in <response/>, the SID is embedded in one of nested <entry><s:dict><s:key/> properties, like <s:key name="sid">scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1615846620_1</s:key>. (Even SID format is very different from the document.) In fact, the return is a job list instead of a single job. I am not sure if this makes a difference: I am using an authorization token to authenticate with the API.  The <author/> of the response, meanwhile, is always splunk-system-user instead of the user that the token belongs to. Additionally, I am not able to get any output when querying results of the returned SID.  In Splunk Web, all jobs submitted by splunk-system-user shows in application "splunk_archiver" instead of search which is the default application when I search in Splunk Web.  The user to which the authorization token belongs to has role of "user" and default app of "launcher" like any other user.
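For what it's worth, the Atom feed shown above is what a GET on /services/search/jobs returns: a listing of existing jobs (here, the scheduler's splunk_archiver jobs). The short <response><sid> document in the docs is returned by a POST that creates a new job under your own user. A hedged curl sketch using token auth (hostname, token and search string are placeholders):

# Create a job (POST): returns the short <response><sid>...</sid></response> document
curl -k -H "Authorization: Bearer <token>" https://myserver:8089/services/search/jobs \
     -d search="search index=_internal | head 5"

# Fetch its results once the job is done
curl -k -H "Authorization: Bearer <token>" \
     "https://myserver:8089/services/search/jobs/<sid>/results?output_mode=json"

If requests are being sent as GETs, that would also explain why only other users' scheduled jobs appear and why no results can be retrieved for those SIDs.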
In the first picture, you can see the desired layout of the report in CSV. The second picture displays the corresponding results in Splunk after running the report. However, in the final picture you can see the exported CSV is hard to read, and the values of the far-right columns are one long string without newlines. Here's the SPL:
index=index host=host sourcetype="db_backup_size" OR sourcetype=df
| eval usage=Size+" "+Name, OldestBackup=OldestBackup+" "+Name
| eval Environment="Production", "Server info (LDOM)"="LDOM", "Mount Point"="Mount Point", "Backup Filesystem"="/backup", "Local mount"="/b"
| chart latest(Environment) as Environment, latest("Server info (LDOM)") as "Server info (LDOM)", latest("Mount Point") as "Mount Point", latest("Backup Filesystem") as "Backup Filesystem", latest("Local mount") as "Local mount", latest(Size) as "Allocation", latest(Used) as "Usage", values(usage) as "Usage for Each Database", values(OldestBackup) as "Oldest Backup Set"
How can I export something that resembles the first picture? Thank you!
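CSV export flattens multivalue cells, which is likely why the right-hand columns collapse into one long string. A hedged tweak (the "; " delimiter is arbitrary) is to join the multivalue fields explicitly, appended after the chart command above:

| eval "Usage for Each Database"=mvjoin('Usage for Each Database', "; "), "Oldest Backup Set"=mvjoin('Oldest Backup Set', "; ")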
Hi, I am trying to enable drilldown on only a single column, named "Training_Link", in a table in my dashboard. I tried using $column, but I'm not sure that's accurate. Then I switched to $row; it works, but drilldown is enabled on the entire table.
<option name="drilldown">cell</option>
<drilldown>
<link target="_blank">$row.Training_Link|n$</link>
</drilldown>
Any idea how I can do this? - Rohan
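A hedged sketch using a drilldown <condition> keyed to the clicked field, so only clicks in the Training_Link column open the link and other columns fall through to a no-op condition:

<option name="drilldown">cell</option>
<drilldown>
  <!-- Only react when the clicked cell is in the Training_Link column -->
  <condition field="Training_Link">
    <link target="_blank">$row.Training_Link|n$</link>
  </condition>
  <!-- Clicks on any other column do nothing -->
  <condition field="*"></condition>
</drilldown>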
So I have a timechart that I'm using to plot latencies. I am trying to capture just the seconds and milliseconds; I do not need the date, hours and minutes, nor the AM/PM values. How do I truncate the x-axis labels to do such a thing?
index="test_index_3" | timechart values(value) as Messages span=1ms
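Timechart always labels the x-axis with full timestamps, so one workaround (a sketch; whether it fits depends on how the spans fall across minute boundaries) is to format the time into a seconds.milliseconds string and chart over that field instead:

index="test_index_3"
| eval sec_ms=strftime(_time, "%S.%3N")
| chart values(value) as Messages by sec_ms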
I just want to look for a hash signature in Splunk.  Example: d09a773dab9a20e6b39176e9cf76ac6863fe388d69367407c317c71652c84b9e What is the basic query please? 
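A minimal starting point (the index and time range should be narrowed to wherever the relevant data lives):

index=* "d09a773dab9a20e6b39176e9cf76ac6863fe388d69367407c317c71652c84b9e"

If the hash is already extracted into a field (for example a field named sha256, which is only an illustration), searching that field directly, e.g. sha256="d09a7...", is more precise.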
Greetings all, I'm currently working on an A/B testing dashboard to see which landing page is getting more engagement. One of my tasks is to find out where users are clicking once they arrive on the landing page, to understand why they are exiting; e.g., whether a user clicked the home page button, the "more info" button, and so on. Is there any way to list these URLs with the stats command, or another useful command?
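Assuming the traffic is in web access logs with the usual fields (the sourcetype, referer filter and uri_path below are assumptions and should be swapped for whatever the data actually contains), a hedged sketch:

sourcetype=access_combined referer="*landing-page-a*"
| stats count by uri_path
| sort - count

"top uri_path" is an equivalent shorthand for the stats/sort pair if percentages are also wanted.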
Hi, a user wants to see the description of a report as well as the title. I know he could click the drop-down arrow, but he doesn't want to have to do that for multiple (similarly named) reports to find the right one. Can I edit the table on that view to include the description for each report? I can see his point: we have lots of reports, so he could save time if that info was there. Thanks
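Changing the built-in Reports page itself is not straightforward, but a small dashboard panel (or saved search the user can bookmark) built on the REST endpoint can show titles and descriptions side by side; a hedged sketch:

| rest /services/saved/searches splunk_server=local
| table title description eai:acl.app eai:acl.owner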
Hi community. We are trying to integrate the new InfoSec app with AD monitoring. For this, we have deployed a Windows Splunk instance that will ingest from our AD and forward the events to the UIX indexer. Questions:
- Do we need to install any AD add-on on the Windows HF?
- Is the AD data CIM compliant by default?
- We are running InfoSec on the UIX server, which is already CIM compliant. How do we manage the AD data that will be indexed if it is not CIM compliant by default? Do we need to make it compliant manually? Can we install an add-on to make it compliant and just use the Windows machine as a HF?
Thanks in advance.
Hi, I tried to set up an input which displays the name of the user who opens the dashboard. I tried a dropdown input where the default is my username, and it works well for me. But my username also appears when other users open the dashboard with their own username (they see both mine and theirs) instead of only theirs. So I checked the code, and what I found is that when I select mine as the default, it retains mine in the code:
<input type="dropdown" token="user_tok" searchWhenChanged="true">
<label>your username is :</label>
<fieldForLabel>username</fieldForLabel>
<fieldForValue>username</fieldForValue>
<search>
<query>| rest /services/authentication/current-context splunk_server=local | fields roles username | mvexpand roles | fields username</query>
</search>
<change>
<condition match="isnotnull($value$) AND $value$!=&quot;&quot;">
<set token="tokTextFilter">$value$</set>
</condition>
</change>
<default>gxxxx</default>
I tried to set the token in the default instead: <default>$user_tok$</default> But nothing changed. How can I make the default be, and display only, the username of whoever opens the dashboard? Thanks for your help!
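A hedged sketch: since the rest search already returns only the viewing user's own context, dropping the hard-coded <default> and selecting the first (and only) choice should make each user see just their own name:

<input type="dropdown" token="user_tok" searchWhenChanged="true">
  <label>your username is :</label>
  <fieldForLabel>username</fieldForLabel>
  <fieldForValue>username</fieldForValue>
  <search>
    <query>| rest /services/authentication/current-context splunk_server=local | fields username</query>
  </search>
  <!-- No static <default>; pick whatever the search returns for the current user -->
  <selectFirstChoice>true</selectFirstChoice>
</input>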
Hi, my requirement is to move the DB location of only one index, not all indexes on the same indexer. Is it possible?
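Yes, bucket paths are set per index, so a single index can live on a different volume; a hedged indexes.conf sketch (the index name and paths are placeholders, and Splunk needs to be stopped while any existing buckets are moved to the new location):

[my_index]
# Per-index paths; only this index is affected
homePath   = /new/volume/my_index/db
coldPath   = /new/volume/my_index/colddb
thawedPath = /new/volume/my_index/thaweddb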
I am currently building out an app in Splunk for external customers to use. On each view in the app I have hidden the Splunk bar for a cleaner UI and because we don't want customers messing with settings. I am building a settings page to make some changes to the default values on other dashboards, and would like to include an embedded view of the password change page and the users page. Is there any way to do this?
I'm a new Splunk user who needs to perform some analysis on a set of logs with the following format:
Status, Starttime, Endtime, JobID, ReturnCode
START,2021-03-15 10:56:15, ,123,
END,2021-03-15 10:56:15,2021-03-15 10:56:27,123,0
...
For a single job, there are separate START and END logs which can be paired together by matching the start time and job ID using the 'transaction' command. When indexing these logs in Splunk, we've been parsing the Starttime field (the 2nd comma-delimited value) as our timestamp, but this means that all log entries with the END status have a _time field corresponding to their start time rather than their end time. This leads to extra work when filtering entries by time or calculating job duration, since we have to parse the Endtime field at search time. Is there any way to add a condition which lets us index a different timestamp field depending on the Status field of an individual log? (i.e., if Status=START, take _time=Starttime, but if Status=END, take _time=Endtime)
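One possible index-time approach (a sketch only; the sourcetype and stanza names are placeholders, and it assumes a Splunk version that supports INGEST_EVAL in transforms.conf) is to overwrite _time from the Endtime column whenever the event starts with END:

props.conf:
[job_status_logs]
TRANSFORMS-set-endtime = set_time_from_endtime

transforms.conf:
[set_time_from_endtime]
# For END events, parse the 3rd CSV column (Endtime) and use it as _time; otherwise keep the default
INGEST_EVAL = _time=if(match(_raw, "^END,"), strptime(replace(_raw, "^END,[^,]*,([^,]*),.*", "\1"), "%Y-%m-%d %H:%M:%S"), _time)

The alternative is to keep indexing on Starttime and derive the end time or duration at search time with transaction or stats, which avoids index-time configuration entirely.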
How do I create a list of forwarders and indexers that are not functioning or are having trouble?
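The Monitoring Console (Forwarders: Instance view) already tracks forwarder status, but a hedged ad-hoc sketch for forwarders that have stopped phoning in (the 15-minute threshold is arbitrary) is:

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_silent=round((now()-last_seen)/60, 1)
| where minutes_silent > 15

For indexers, the Monitoring Console's health checks, or a similar "gone quiet" search over index=_internal restricted to the indexer hosts, can flag instances that have stopped reporting.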
Hello everyone. A few words about the setup I have:
- about 30 indexers in a cluster
- the following data chain: syslog forwarder (or UF installed on servers) -> intermediate forwarder(s) at particular sites -> intermediate forwarder tier (4 IF machines) -> indexer tier
When I check the indexing rate in the DMC I can see that not all indexers have a similar indexing rate: 4-5 of them have quite a high rate, another 15-20 have a medium value, and the rest seem not to be used in the indexing process at all. I want to ask if there is any way to configure the setup so that the indexing rate is balanced across all indexers; in other words, I want most indexers to be indexing at a similar level. Should I use load balancing to achieve this goal?
BR Dawid
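Uneven rates often come from long-lived forwarder connections sticking to a few indexers. A hedged outputs.conf sketch for the intermediate forwarders (the stanza name and server list are placeholders, and the values should be tuned for your environment):

[tcpout:indexer_cluster]
server = idx01:9997, idx02:9997, idx03:9997
# Switch target indexers on a fixed schedule, even if the current connection is still streaming
autoLBFrequency = 30
forceTimebasedAutoLB = true

On the universal forwarders, enabling event breaking for the relevant sourcetypes (EVENT_BREAKER_ENABLE in props.conf) also helps them switch indexers mid-stream instead of holding one connection open for a large file.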