All Topics


Hello Splunk users! For internal company purposes I needed to publish dashboards to a very limited number of users. The limitation was not based on user role but strictly on user ID, and I didn't find a suitable solution on the forums. While tinkering with "depends" based on a couple of posts here, I came up with a show/hide panel combo driven by the env:user token, which I'm sharing now. The user whitelist is managed by a simple eval/case statement directly inside the dashboard's XML code for fast updates. Panel visibility changes based on a dropdown input controlled by that eval command. Since I only have the basic user role, I cannot make any server changes. If you have ideas on how to improve this concept, please let me know (i.e., how to use "in" syntax instead of many case options).

<form>
  <label>user_authority_test</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="field1" depends="$user_ok$">
      <label>Limited input</label>
      <choice value="1">Limited visibility</choice>
    </input>
    <input type="checkbox" token="field2">
      <label>Unlimited input</label>
      <choice value="1">Visible for all</choice>
      <delimiter> </delimiter>
    </input>
  </fieldset>
  <row depends="$user_not_ok$">
    <panel>
      <title>Message for limited users and/or some other panels</title>
      <html>
        <div>Message body</div>
      </html>
    </panel>
  </row>
  <row depends="$hide$">
    <panel>
      <html>
        <h2>Debug</h2>
        <div>authority: $authority$</div>
        <div>user: $env:user$</div>
      </html>
    </panel>
    <panel>
      <table>
        <title>auth</title>
        <search>
          <query>| makeresults | eval user="$env:user$" | table user</query>
          <earliest>-1s</earliest>
          <latest>now</latest>
          <done>
            <!-- instead of user1, user2 etc. add the desired user IDs -->
            <eval token="form.authority">case($result.user$=="user1","on", $result.user$=="user2","on", 1=1, "off")</eval>
          </done>
        </search>
      </table>
    </panel>
    <panel>
      <!-- Changing the visibility of panels is a two-step process: the eval command modifies the value
           of the dropdown, and another token embedded inside the dropdown controls which panels are
           visible and which are not. You can create many sets of values in the conditions for more flexibility. -->
      <input type="dropdown" token="authority">
        <label>authority set</label>
        <choice value="on">On</choice>
        <choice value="off">Off</choice>
        <change>
          <condition value="on">
            <set token="user_ok"></set>
            <unset token="user_not_ok"></unset>
          </condition>
          <condition value="off">
            <unset token="user_ok"></unset>
            <set token="user_not_ok"></set>
          </condition>
        </change>
      </input>
    </panel>
  </row>
  <row depends="$user_ok$">
    <panel>
      <table>
        <search id="MainSearch">
          <query>index=_internal | head 5 | table sourcetype, _time</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
    <panel>
      <chart>
        <search base="MainSearch">
          <query>| chart c(_time) by sourcetype</query>
          <progress>
            <set token="Area_Name">$Area_placeholder$</set>
          </progress>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.sliceCollapsingThreshold">0</option>
      </chart>
    </panel>
  </row>
</form>
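On the closing question about "in" syntax: a minimal sketch of the <done> handler, assuming a Splunk version whose eval language includes the in() function (the quoted "$result.user$" form and the user IDs are placeholders to replace):

<done>
  <!-- hypothetical whitelist using in() instead of a case() chain -->
  <eval token="form.authority">if(in("$result.user$", "user1", "user2", "user3"), "on", "off")</eval>
</done>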
Can someone provide the steps to add a TA file on Splunk IDM & Splunk Cloud?
Hi Team, I am trying to remove the view permission of a particular user for a dashboard. I want the user to be able to view only one dashboard. This worked for classic dashboards, but for Dashboard Studio, although the user was not able to see the data inside the dashboards, they were still able to see the list of Dashboard Studio dashboards. Can anyone help? What do I need to do so the user can see only the dashboard that I want them to see? Regards, Shubhangi
12/27/21 6:42:50.000 AM
PSComputerName Name           Memory
-------------- ----           ------
Host1          dfdf_Svc.exe   16024
Host1          sssService.exe 13142056
Host1          abcservice.exe 31380
Host1          xyzservice.exe 114340
Host1          rrrrr.exe      29304

12/27/21 6:42:50.000 AM
PSComputerName Name           Memory
-------------- ----           ------
Host2          dfdf_Svc.exe   16064
Host2          sssService.exe 13144028
Host2          abcservice.exe 114708
Host2          xyzservice.exe 32248
Host2          rrrrr.exe      33616

I have these output events in my Splunk logs. Each event is for one specific server only; under one server we have 5 services running, with their associated memory information. The output is in table format. I would like to create a regular expression so that I can produce a table with the columns Servername, Servicename, and Memory (in MB), since the memory above is in bytes.
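For illustration, a minimal SPL sketch, assuming each event contains one whole table as shown (index and sourcetype are placeholders); it pulls every host/service/memory row out with rex max_match=0, expands the rows, and converts bytes to MB:

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "(?<Servername>\S+)\s+(?<Servicename>\S+\.exe)\s+(?<Memory>\d+)"
| eval row=mvzip(mvzip(Servername, Servicename, "|"), Memory, "|")
| mvexpand row
| eval Servername=mvindex(split(row, "|"), 0),
       Servicename=mvindex(split(row, "|"), 1),
       MemoryMB=round(tonumber(mvindex(split(row, "|"), 2)) / 1024 / 1024, 2)
| table Servername, Servicename, MemoryMB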
Hi, I have 6 alerts that run on a schedule, but only one of them is working. If I run the searches manually, matching results come back. Why would the alerts not be triggering?
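As a first diagnostic step, a minimal sketch (the savedsearch name is a placeholder) that checks whether the scheduler is actually running, skipping, or erroring on the alerts:

index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| stats count by savedsearch_name, status, reason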
Hi, I am new to Splunk. I have Splunk events like this: "system CPU | 6.039 % | system time | 0.009 % |". How can I get the average CPU % usage value per index, via a report or dashboard?
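For illustration, a minimal sketch, assuming the percentage sits in the raw text exactly as shown (the cpu_pct field name and the index filter are mine to adjust):

index=* "system CPU"
| rex "system CPU \|\s*(?<cpu_pct>[\d.]+)\s*%"
| stats avg(cpu_pct) AS avg_cpu_percent by index

Save the search as a report, or add it to a dashboard panel.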
Hi, I want to create a search to find anyone who changes a sAMAccountName. A sAMAccountName could be, for example, sAMAccountName=cdf or sAMAccountName=abc. If anyone changes sAMAccountName=abc to sAMAccountName=abc1, an alert should trigger.
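For illustration, a minimal sketch, assuming Windows Security auditing is being collected and that event 4781 ("The name of an account was changed") covers the rename; the index and field names depend on your inputs and TA, so treat them as placeholders:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4781
| table _time, host, src_user, Old_Account_Name, New_Account_Name

Saved as an alert that triggers when the number of results is greater than 0, this fires on any sAMAccountName change.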
Dear Splunkers, can you please assist with the following problem: We have more than 20 UFs installed on Windows machines. All of them have the deployment server set and were visible in Forwarder Management, but at some point all of them disappeared from FM and now only appear there intermittently. I have tried deleting $SPLUNK_HOME/etc/instance.cfg on several forwarders and restarting them, but the problem was not fixed. Any ideas how to fix it and what could cause such strange behavior? Regards, Eugene
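For what it's worth, one classic cause of clients flapping in Forwarder Management is cloned Windows images whose deployment clients share the same GUID. A hedged check to run on the deployment server (the REST endpoint exists on recent versions; verify the exact field names on yours):

| rest /services/deployment/server/clients splunk_server=local
| stats count values(hostname) AS hostnames by guid
| where count > 1

Any GUID with more than one hostname points at cloned instance.cfg files; deleting instance.cfg and restarting (as you did) regenerates the GUID, but it has to be done on every cloned forwarder.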
Hi Team, we can see the Palo Alto Networks add-on parsing informational messages into the Alerts data model (they have tag=alert assigned). Sharing a snapshot for reference. Can anyone help me identify the reason and any business justification behind this? Thanks in advance.
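To pin down where the tag comes from, a small sketch (the index and field names assume Palo Alto TA defaults and may differ in your environment):

index=pan_logs tag=alert log_subtype=informational
| stats count by eventtype, sourcetype

The eventtype shown should map to an entry in the add-on's eventtypes.conf, with the tag assigned in its tags.conf; that pairing is what pulls the events into the Alerts data model.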
Hello, is it possible to create a correlation search in the Splunk ES app using the REST API?
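Yes: an ES correlation search is ultimately a saved search carrying extra action.correlationsearch.* parameters, so it can be created through the saved/searches REST endpoint. A hedged curl sketch (host, credentials, and the search are placeholders, and the exact parameter set varies by ES version, so verify against your ES documentation):

curl -k -u admin:changeme \
  https://es-searchhead:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches \
  -d name="My Correlation Search" \
  --data-urlencode search="index=main error | stats count" \
  -d cron_schedule="*/5 * * * *" \
  -d is_scheduled=1 \
  -d action.correlationsearch.enabled=1 \
  -d action.correlationsearch.label="My Correlation Search" \
  -d actions=notable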
I am looking to monitor an audit folder that contains files which get generated automatically every day. Below is how the audit directory looks (these are the file names):

Activity_Engine_2021-12-18T14.51.04Z
Activity_Engine_2021-12-19T02.53.38Z
Activity_Engine_2021-12-19T15.00.28Z
Activity_Engine_2021-12-20T03.00.30Z

Windows sample: I am looking to monitor only the latest file and index the logs inside it, but I am not sure how to achieve this. Any help would be appreciated.
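One common approach, since a monitor input tails every matching file: point it at the file-name pattern and use ignoreOlderThan so only recently written files get picked up, letting Splunk index new lines as the newest file is written. A minimal inputs.conf sketch (path, index, and sourcetype are placeholders):

[monitor://D:\audit\Activity_Engine_*]
index = my_index
sourcetype = activity_engine
ignoreOlderThan = 1d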
how to get splunk ES 7-Day sandbox?
Hi, we are running into an issue where the Splunk eStreamer Technical Add-On keeps crashing when receiving events from our Cisco Firepower instance. The exact error logs observed on the Splunk side are as follows:

2021-12-17 09:19:23,377 root INFO 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)
2021-12-17 09:19:23,384 Writer ERROR [no message or attrs]: 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)
'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/baseproc.py", line 209, in receiveInput
    self.onReceive( item )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/baseproc.py", line 314, in onReceive
    self.onEvent( item )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 416, in onEvent
    write( item, self.settings )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 238, in write
    streams[ index ].write( event['payloads'][index] + delimiter )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/streams/file.py", line 96, in write
    self.file.write( data.encode( self.encoding ).decode('utf-8') )
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2013' in position 460: ordinal not in range(256)
2021-12-17 09:19:23,384 Writer ERROR Message data too large. Enable debug if asked to do so.
2021-12-17 09:19:23,385 Writer INFO Error state. Clearing queue

We have also updated the TA to the latest version (4.8.3) as noted on the Splunk Add-On page for the app: https://splunkbase.splunk.com/app/3662/ On the HF side, we also increased the number of worker processes from 4 to 8, which did not help. Wondering if anyone has experienced the same issue. Let me know. Thanks.
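For what it's worth, the crash is happening in the TA's file writer shown in the traceback. A hypothetical local workaround (not vendor guidance, and it will be overwritten by TA upgrades) is to make the encode call lossy instead of fatal in estreamer/streams/file.py:

# line ~96 of /opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/streams/file.py
# hypothetical workaround: replace characters the configured codec can't encode
# (such as '\u2013') instead of raising UnicodeEncodeError
self.file.write( data.encode( self.encoding, errors='replace' ).decode('utf-8') )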
When I restart the search head, Incident Review is very, very slow.
Hello. We are planning to run Splunk 8.1.x on Windows Server, and we would like to take periodic backups of the following data:

- Configuration (/SPLUNK_HOME/etc)
- Index data
- KV store

Question 1: Is it possible to take incremental backups with Robocopy using the following procedure?
i. Take a backup of the KV store using the "splunk backup kvstore" command
ii. Use Robocopy to back up the following files:
- Index data (excluding hot buckets) and the KV store → directory (/SPLUNK_HOME/var/lib/splunk)
- Configuration files → directory (/SPLUNK_HOME/etc)

Question 2: We are also verifying the restore procedure. After reinstalling Splunk, is it possible to restore by overwriting with the data backed up in Question 1?

Thank you in advance.
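Regarding Question 1, a minimal sketch of the procedure, assuming a default install path, D:\backup as the destination, and hot-bucket directories named hot_v1_* (verify all three against your environment):

REM 1. Back up the KV store with Splunk's own command
"C:\Program Files\Splunk\bin\splunk" backup kvstore

REM 2. Incremental copy (/E /XO skips files that are not newer), excluding hot buckets
robocopy "C:\Program Files\Splunk\etc" "D:\backup\etc" /E /XO
robocopy "C:\Program Files\Splunk\var\lib\splunk" "D:\backup\var_lib_splunk" /E /XO /XD hot_v1_*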
Hi, I'm new to creating custom alert actions and I'm following the documentation provided by Splunk. I've got my alert to work; however, I couldn't find a mechanism to inject the following two items into my application:

- The number of items in the search result
- The actual search query

In my use case I need both of them, and I'm not sure how to do that. I tried following another solved answer along similar lines, but it hasn't helped me so far. Here's what I did in savedsearches.conf:

.....
action.tmc.param.result_count = $job.resultCount$
action.tmc.param.search_query = $job.search$
.....

I've also defined the savedsearches.conf.spec file as follows:

.....
action.tmc.param.result_count = <integer>
action.tmc.param.search_query = <string>
.....

However, in my Python script, when I print out the configuration sent, I don't see these two arguments passed. I've restarted Splunk, but that hasn't helped either. I would really appreciate it if someone could guide me in the right direction. Thanks!
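For what it's worth, the $job.*$ tokens are expanded for some built-in actions such as email, but not for custom alert action parameters, which would explain why the values never arrive. A hedged Python 3 sketch of fetching both values inside the alert action script instead, using the sid and session_key that Splunk passes in the stdin payload (verify the jobs-endpoint field names on your version):

import sys, json, ssl
from urllib.request import Request, urlopen

# Splunk invokes the script with --execute and a JSON payload on stdin
payload = json.loads(sys.stdin.read())
sid = payload['sid']
session_key = payload['session_key']
server_uri = payload['server_uri']

# Query the job endpoint for this sid; the entry name holds the dispatched search string
req = Request('%s/services/search/jobs/%s?output_mode=json' % (server_uri, sid))
req.add_header('Authorization', 'Splunk %s' % session_key)
ctx = ssl._create_unverified_context()   # splunkd's default certificate is self-signed
entry = json.loads(urlopen(req, context=ctx).read())['entry'][0]

result_count = entry['content']['resultCount']
search_query = entry['name']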
Aloha, in doing a little research we found a similar thread on Splunk Answers with a possible solution; however, there are some things we need clarified. Here's the URL to the Splunk Answers thread for reference: https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-use-a-value-from-a-different-field-in-drilldown/m-p/388556 Basically we have search results with 5 columns and 10 rows containing random numbers in each cell, and the requirement is to click one of the numbers in a cell and open a new tab to another search or lookup file. According to the Splunk Answers thread, there's an option or variable $row.column_one$, $row.column_two$, $row.column_three$... that can be used. Here's a snippet of the thread: Is this true/correct? How do we set or call these variables to point to a specific row.column number for the $click.value$? Is this based on or using a <condition>? Thanks in advance for your help.
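Yes, that part of the thread is correct: inside a table's <drilldown>, $row.<fieldName>$ carries the clicked row's value for any column, and a <condition field="..."> scopes the behavior to clicks on a specific column. A minimal Simple XML sketch (the index, field names, and target search are placeholders):

<table>
  <search>
    <query>index=my_index | table column_one, column_two, column_three</query>
  </search>
  <drilldown>
    <condition field="column_two">
      <!-- open a new tab, passing the clicked row's column_one value, URL-encoded -->
      <link target="_blank">search?q=index%3Dmy_index%20id%3D$row.column_one|u$</link>
    </condition>
  </drilldown>
</table>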
I am probably asking the most basic question ever, but I'm new to Splunk and just trying to figure out my host URL. Examples I'm seeing on the internet for my particular use case look something like http://192.168.1.103:8000, but the only thing I've seen in my environment is localhost:8000, which doesn't work for what I need. I'm trying to pull a dashboard into a web app, for reference on what I'm attempting to do.
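For reference, localhost:8000 only works on the machine where Splunk runs; from anywhere else the URL is http://<that machine's IP address or hostname>:8000 (assuming the default Splunk Web port). A minimal check:

ipconfig                                  (Windows; use "ip addr" or "ifconfig" on Linux to find the machine's address)
$SPLUNK_HOME/bin/splunk show web-port     (confirms the port Splunk Web is listening on)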
I've got a log file that I am monitoring via a props.conf on the UF. I'm using the following settings:

UF - props.conf:

[my_sourcetype]
INDEXED_EXTRACTIONS = JSON

Search head cluster (via deployer in an app bundle) props.conf:

[my_sourcetype]
KV_MODE = NONE
AUTO_KV_JSON = FALSE

If I run btool on one of the search heads for that sourcetype I get:

ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = FALSE
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
KV_MODE = NONE
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRUNCATE = 10000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
termFrequencyWeightedDist = false

I'm not sure what I am missing, but I can't get the duplicate field values to cease.
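One way to rule out app-context or precedence problems is btool's --debug flag, which prints the file each setting comes from, so you can confirm the deployed app is actually winning for that sourcetype on the search heads (my_sourcetype is the placeholder here):

$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug | egrep "KV_MODE|AUTO_KV_JSON|INDEXED_EXTRACTIONS"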
Hello, looking for some assistance in reconstructing my query, which currently uses | transaction with a traceId value to tie together a couple of different sourcetypes/sources. The query runs really slow (some of the sourcetypes have results in the 200-million range), so I'm looking to speed it up by using | stats ... by traceId instead.

The first source example snippet shows the traceId and the 404 response code I am looking for:

time=2021-12-11T23:59:51-07:00 time_ms=2021-12-11T23:59:51-07:00.620+ requestId=-1796576042 traceId=-1796576042 servicePath="/nationalnavigation/" remoteAddr=x.x.x.x clientIp=x.x.x.x clientAppVersion=NOT_AVAILABLE clientDeviceType=NOT_AVAILABLE app_version=- apiKey=somekey oauth_leg=2-legged authMethod=oauth apiAuth=true apiAuthPath=/ oauth_version=1.0 target_bg=default requestHost=services.timewarnercable.com requestPort=8080 requestMethod=GET requestURL="/nationalnavigation/V1/symphoni/event/tmsid/blah.com::TVNF0321206000538347?division=FTWR&lineup=15&profile=sg_v1&cacheID=959&longAdvisory=false&vodId=fort_worth&tuneToChannel=false&watchLive=true&watchOnDemand=true&rtReviewsLimit=0&includeAdult=f" requestSize=835 responseStatus=404 responseSize=420 responseTime=0.405 userAgent="Java/1.xxx" mapTEnabled="F" cClientIp="V-1|IP-x.x.x.x|SourcePort-12345|TrafficOriginID-x.x.x.x" sourcePort="12345" appleEgressEnabled="F" oauth_consumer_key="somekey" x_pi_auth_failure="-" pi_log="pi_ngxgw_access"

The second source example shows the REST server logs with an exception:

2021-12-11 23:59:51,261 ERROR [qtp1647496677-7239] [-1796576042] [c.t.a.n.r.s.r.s.SymphoniRestServiceBroker.handleNnsServiceErrorHeaders:1363] An internal service error occurred: com.twc.atgw.nationalnavigation.SymphoniWebException: Event Not Found

Here's the current query I am looking to improve:

index=vap sourcetype=nns_all OR sourcetype=pi_ngxgw_access "nationalnavigation.SymphoniWebException: Event Not Found" OR "responseStatus=404"
| rex "\] \[(?<traceId>.+)\] \[c.t.a.n.r.s.r.s"
| transaction keepevicted=true by traceId
| search "nationalnavigation.SymphoniWebException: Event Not Found" AND "responseStatus=404"
| mvexpand requestURL
| search requestURL="/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*" OR "/nationalnavigation/V1/symphoni/event/tmsid*"
| eval requestURLLength=len(requestURL)
| rex field=requestURL "/nationalnavigation/V1/symphoni/event/tmsid/.*\%3A\%3A(?<queryString>.+)"
| eval endpoint=case(match(requestURL,"/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*"), "/nationalnavigation/V1/symphoni/series/tmsproviderprogramid", match(requestURL,"/nationalnavigation/V1/symphoni/event/tmsid*"), "/nationalnavigation/V1/symphoni/event/tmsid", 1=1, requestURL)
| rex field=queryString "(?<tmsIds>[^?]*)"
| rex field=queryString "(?<tmsProviderProgramIds>[^?]*)"
| eval assetIds=coalesce(tmsIds,tmsProviderProgramIds)
| eval assetCount=mvcount(split(assetIds,","))
| stats count AS TxnCount by endpoint
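As one possible stats-based shape (a sketch, assuming traceId is auto-extracted as a key=value field in the access logs while the REST logs still need the rex), correlate by traceId and keep only IDs seen in both sourcetypes, then continue with the existing URL parsing:

index=vap ((sourcetype=pi_ngxgw_access responseStatus=404) OR (sourcetype=nns_all "nationalnavigation.SymphoniWebException: Event Not Found"))
| rex "\] \[(?<rexTraceId>[^\]]+)\] \[c\.t\.a\.n\.r\.s\.r\.s"
| eval traceId=coalesce(traceId, rexTraceId)
| stats dc(sourcetype) AS sourcetypes, values(requestURL) AS requestURL by traceId
| where sourcetypes=2
| mvexpand requestURL
| search requestURL="/nationalnavigation/V1/symphoni/series/tmsproviderprogramid*" OR requestURL="/nationalnavigation/V1/symphoni/event/tmsid*"

The rest of the original pipeline (the endpoint classification and the final stats count by endpoint) can follow unchanged.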