All Topics


We want to set SGT as the default time zone for a particular search head, and that SH itself runs in EDT. We have already applied the TZ setting in props.conf on the master for that index, so users can view the related events correctly once the configuration is pushed. Now the application team wants SGT to be the default in the preference settings, so that whenever any query is run against the index the results display in SGT. As the sample events show, this is not happening as expected. Here are the btool results for the SH:

-bash-4.2$ /opt/splunk/splunk_sas/bin/splunk btool --debug user-prefs list
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf [default]
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/splunk_instrumentation/local/user-prefs.conf [general]
/opt/splunk/splunk_sas/etc/apps/splunk_instrumentation/local/user-prefs.conf dismissedInstrumentationOptInVersion = 4
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf hideInstrumentationOptInModal = 1
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf notification_python_3_impact = false
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf render_version_messages = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf search_assistant = compact
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf search_auto_format = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf search_line_numbers = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf search_syntax_highlighting = light
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf search_use_advanced_editor = 1
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf theme = enterprise
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf tz = GMT
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf [general_default]
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf appOrder = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf default_earliest_time = -24h@h
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf default_latest_time = now
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf default_namespace = $default
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf hideInstrumentationOptInModal = 0
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf notification_noah_upgrade = true
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf notification_python_2_removal = false
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf notification_python_3_impact = false
/opt/splunk/splunk_sas/etc/apps/user-prefs/default/user-prefs.conf showWhatsNew = 1
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_app_splunk_admin]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_app_splunk_api]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_app_splunk_***]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_app_splunk_infra]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_app_splunk_power]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_general]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf [role_general_default]
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf appOrder = search
/opt/splunk/splunk_sas/etc/apps/TA_***_LDAP/default/user-prefs.conf default_namespace = search
/opt/splunk/splunk_sas/etc/system/local/user-prefs.conf tz = Asia/Hong_Kong
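For reference, a system-wide default time zone for users who have not picked their own can be set in user-prefs.conf on the search head; note that the btool output above resolves tz to Asia/Hong_Kong from system/local, and that per-user files under etc/users/ still override any default. A sketch (Asia/Singapore is the IANA zone name for SGT):

```
# $SPLUNK_HOME/etc/system/local/user-prefs.conf
[general_default]
tz = Asia/Singapore
```

A restart of Splunk Web (or a debug/refresh) is typically needed before the new default shows up in user preferences.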
Can someone please help me extract the field Specific_DL_Testing from the sample log below?

instance of the "\Specific_DL_Testing" task.

The output should be Specific_DL_Testing.
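A rex sketch for that sample (Specific_DL_Testing is kept as the capture name; the pattern assumes the value always appears quoted with a leading backslash, and note that a literal backslash in an SPL quoted regex usually has to be written as four backslashes):

```
| rex "\\\\(?<Specific_DL_Testing>[^\"]+)\""
```

The capture stops at the closing double quote, so it returns Specific_DL_Testing without the surrounding punctuation.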
Hi Team, Does AppDynamics support UWP application integration?
Hello, how can I update/change my display name on the Splunk website (My Dashboard panel) and also in the education panel (My Training)?
Hello team, I am getting alert emails from my Gmail ID, but I want them to come from splunk@splunk.abc.com. What needs to be done for this?
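For reference, the sender address for alert emails comes from the email alert action settings (Settings > Server settings > Email settings in Splunk Web), which are stored in alert_actions.conf. A sketch (the SMTP host is a placeholder, and the mail server must be willing to accept the new sender address):

```
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
from = splunk@splunk.abc.com
mailserver = <your SMTP host>
```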
Hi Team, I have logs coming from certain nodes and clusters. How can I detect if logs go missing from even one of the clusters? The nodes and clusters are encoded in the field named source. For example, I have source = logs/node*c*, where node* covers 3 to 4 nodes and c* covers 8 to 10 clusters. I want to create an alert that notifies me if logs are missing from even one cluster. Thanks.
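One common pattern (a sketch; the index name and the expected_sources.csv lookup are placeholders you would maintain yourself) is to count recent events per source with tstats, append the expected source list with a zero count, and alert when any expected source still sums to zero:

```
| tstats count where index=main source=logs/node*c* earliest=-1h@h latest=@h by source
| append
    [| inputlookup expected_sources.csv
     | eval count=0 ]
| stats sum(count) as events by source
| where events=0
```

Scheduled hourly with an alert condition of "number of results > 0", this fires once per silent source.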
Hello, can someone guide me on how to ingest logs from an SFTP server? I have Heavy Forwarders available that sit outside the SFTP location. Thanks!
I have a table, and when I click its edit button I take the value from that row and set it into the filters above. In the case of the checkbox I have two choices: if only one of System and BI Report is selected, its value gets set in the filters above, but if both System and Report are selected, the filters do not get set at all. Any help appreciated.
Hi Team, I'm looking for a way to rename a correlation search that was created with the wrong name format. The CS is currently disabled, but I don't see a way to actually rename it. If I delete it from Saved Searches, will that remove all the notables which were created from this custom CS? Regards, Varun
I am attempting to list all of the inputs that I have configured in "Splunk_TA_aws". I am able to list all of my inputs set up as CloudTrail, S3, and SQS, but I am unable to see my 20+ (all working) inputs for CloudWatch Logs. I have attempted to hit:

https://blahblah:8089/servicesNS/admin/Splunk_TA_aws/data/inputs/aws_cloudwatch_logs/
https://blahblah:8089/servicesNS/admin/Splunk_TA_aws/data/inputs/aws_cloudwatch/

Zero results show up. As mentioned, hitting the other endpoints for CloudTrail, S3, and SQS works correctly. Is there another path/location to gather my inputs set up as CloudWatch Logs?
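As a cross-check from the search bar, the rest command can enumerate every input the instance actually exposes, which helps confirm the exact endpoint name the CloudWatch Logs inputs live under in your TA version (a sketch; the wildcard is deliberate because the input type name has varied across add-on releases):

```
| rest splunk_server=local /services/data/inputs/all
| search title="*cloudwatch*"
| table eai:acl.app title
```

Whatever endpoint those entries report is the path to query under /servicesNS/admin/Splunk_TA_aws/.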
We're intermittently getting this error (so far twice in 2 weeks) when trying to use the lookup command on a KV store collection. The full error message is "External command based lookup <kv_store> is not available because KV Store status is currently unknown". We only found the error through the logs a few hours after the failure, because the scheduled search with the lookup command didn't run successfully. When run manually or on its next schedule, the search ran fine, and the KV store was working as intended upon checking. I couldn't find information online on what the "unknown" status means for KV stores. Has anyone else seen this error?
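For catching this closer to failure time, the KV store state can be polled from the search head itself and alerted on; a sketch using the server info REST endpoint, which exposes a kvStoreStatus field:

```
| rest splunk_server=local /services/server/info
| fields splunk_server kvStoreStatus
```

Scheduling this and alerting when kvStoreStatus is anything other than "ready" can show whether the store was briefly restarting or unreachable when the lookup failed.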
Hi, we have a requirement to pull data from a third-party AWS account. The third-party provider will push the data to an S3 bucket in their AWS account, and we are looking to pull that into an on-prem Splunk instance. There is an AWS Splunk add-on on Splunkbase; are we able to use this add-on to pull data from a third-party AWS account, and if so, how is it authenticated against the third-party account? Please point me to any documentation available. Any suggestions?
Hi everyone! I'm still fairly new to Splunk, so sorry if this is a simple question. I have some logs that do not show the field names in the search results after a search has run, but when I expand an event, I can see the names. Is it not possible to have the field names shown in the first picture?
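When names show up only in the expanded event, the usual causes are the search running in Fast mode (which suppresses field discovery) or search-time extraction not being enabled for the sourcetype. Switching the search mode to Verbose is the quickest check; if the events are key=value or JSON pairs, enabling automatic KV extraction is a sketch like this (my_sourcetype and the app path are placeholders):

```
# $SPLUNK_HOME/etc/apps/<app>/local/props.conf
[my_sourcetype]
KV_MODE = auto
```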
I have two kinds of logs containing two types of URI that I want to rex into different fields.

{logType=DOWNSTREAM_RESPONSE, requestUri=https://google.come.com:8000/google/api/updateapi?&lo=en_US&sc=RT, duration=22, requestId=znXdSxbJQw6iVTtEeykZVA, globalTrackingId=null, requestTrackingId=null, request={body={"a":{"b":{"country":"US", }}}, method=POST, requestUri=https://google.come.com:443/google/api/updateapi?&lo=en_US&sc=RT}, response=(200 OK, { "body="{} }, "headers="{}, "statusCode=OK", statusCodeValue=200}")"}

{logType=DOWNSTREAM_RESPONSE, requestUri=https://google.come.com:8000/google/api/deleteapi, duration=33, requestId=asdasd, globalTrackingId=null, requestTrackingId=null, request={body={"a":{"b":{"country":"US", }}}, method=POST, requestUri=https://google.come.com:443/google/api/updateapi?&lo=en_US&sc=RT}, response=(200 OK, { "body="{} }, "headers="{}, "statusCode=OK", statusCodeValue=200}")"}

The fields I want are:

http = https
URL = google.come.com:8000
service = /google
api = /api/updateapi (or /api/deleteapi)
params = ?&lo=en_US&sc=RT

Is there a way to regex this?
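A rex sketch over the raw event (field names follow the post; it assumes the scheme, then host:port, a single path segment as the service, the rest of the path as the api, and an optional query string, and it captures only the first requestUri in each event):

```
| rex "requestUri=(?<http>https?)://(?<URL>[^/]+)(?<service>/[^/]+)(?<api>/[^?,\s]+)(?<params>\?[^,\s]*)?"
```

Against the first sample this yields http=https, URL=google.come.com:8000, service=/google, api=/api/updateapi, params=?&lo=en_US&sc=RT; against the second, params is simply absent.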
I have created a dashboard with multiple panels, each panel based on a dedicated report. Here's an example. This works great, but there must be a better way to do this instead of having four reports per dashboard when the only difference between the searches is the time range.

<dashboard>
  <label>Packets</label>
  <row>
    <panel>
      <title>Daily</title>
      <chart>
        <search ref="Packets Daily"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Weekly</title>
      <chart>
        <search ref="Packets Weekly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Monthly</title>
      <chart>
        <search ref="Packets Monthly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Yearly</title>
      <chart>
        <search ref="Packets Yearly"></search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</dashboard>

What I would prefer is to have one single report, with each panel applying the correct time frame. I do not want a time picker displayed on the dashboard. I tried using earliest and latest syntax in each search section of the XML, but the time is inherited from the report. Thanks in advance.
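If maintaining four nearly identical reports is the main pain, one alternative (a sketch in Simple XML; the query shown is a placeholder since the report's actual SPL isn't in the post) is to inline the search once per panel and vary only the earliest/latest elements, which removes the saved reports and their inherited time ranges entirely:

```
<dashboard>
  <label>Packets</label>
  <row>
    <panel>
      <title>Daily</title>
      <chart>
        <search>
          <query>index=netflow sourcetype=packets | timechart count</query>
          <earliest>-1d@d</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <title>Weekly</title>
      <chart>
        <search>
          <query>index=netflow sourcetype=packets | timechart count</query>
          <earliest>-7d@d</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</dashboard>
```

The trade-off is that the SPL is repeated in the XML, but there is only one place (the dashboard) to edit, and no time picker is involved.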
Hello, I downloaded and installed Splunk Enterprise 9.x on my laptop. I am creating a Dashboard Studio dashboard and experiencing an issue with images. I am using a token in the image path to change the image based on a dropdown input, so the images can't be held in the KV store; they must remain local to my PC.

In my production environment at work (RedHat), this works. The image path within the DS configuration pane is /en-US/static/app/search/images, although the path on the Linux server is really etc/apps/search/appserver/static/images. Splunk manipulates the path in some strange way: "apps" becomes "app", and "static" is placed before "search" (the app name).

I am using Windows on my laptop, and I can't figure out the correct path for the image URL. The Windows path for these images is C:\Program Files\Splunk\etc\apps\search\appserver\static\images. What is the Dashboard Studio Splunk-speak for this path on Windows? Thanks in advance for your help. God bless, Genesius
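For what it's worth, the /static/app/... URL is generated by Splunk Web and is not a filesystem path: /static/app/<app_name>/<file> is served from <app_name>/appserver/static/<file> under etc/apps on any platform, so the same URL form should work on Windows. A sketch of the mapping (example.png is a hypothetical file name):

```
File on disk (Windows):
  C:\Program Files\Splunk\etc\apps\search\appserver\static\images\example.png

Image URL in Dashboard Studio:
  /static/app/search/images/example.png
```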
Hello Splunkers, is there a way to add padding or control the white space between the single value blocks in the aggregated fields? I.e., I have three blocks created with the single value viz using trellis. This creates three nice blocks, but I would like to increase the white space between them if possible, or to use three separate single value viz and space them with padding, etc. Thank you for a great resource, eholz
I have this working query, which needs some additional detailing:

index=_internal earliest=-1h@h latest=@h
| lookup api uri OUTPUT operation service
| rex "duration=(?<response_time>[^,]+)"
| multikv
| eval ReportKey="Today"
| append
    [ search index=_internal earliest=-7d-1h@h latest=-7d@h
      | lookup api uri OUTPUT operation service
      | rex "duration=(?<response_time>[^,]+)"
      | multikv
      | eval ReportKey="lastweek"
      | eval _time=_time+604800 ]
| stats first(uri) as apiName avg(response_time) as avgresponse_time count by operationName ReportKey

Is there a way to get output like this, where it compares the average response times and gives the percentage differences?

operation  uri       today avg(response_time)  lastweek avg(response_time)  % difference  today count  last week count
abc        /api/abc  222                       333                                        12312        42343
xyz        /api/xyz  867                       4234                                       87978        67867
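One way to get that layout (a sketch built on the post's own query; the field names are taken from it, and the percentage formula relative to last week is an assumption) is to compute the averages and counts per ReportKey, pivot them onto one row per operation/uri, and then eval the difference:

```
... | stats avg(response_time) as avg_rt count by operation uri ReportKey
| eval today_avg=if(ReportKey="Today", avg_rt, null()),
       lastweek_avg=if(ReportKey="lastweek", avg_rt, null()),
       today_count=if(ReportKey="Today", count, null()),
       lastweek_count=if(ReportKey="lastweek", count, null())
| stats values(today_avg) as today_avg values(lastweek_avg) as lastweek_avg
        values(today_count) as today_count values(lastweek_count) as lastweek_count
        by operation uri
| eval pct_diff=round((today_avg - lastweek_avg) / lastweek_avg * 100, 1)
```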
Hello All - Is it possible to create a search or alert that is based on dynamic variables? The end goal I'm trying to achieve is to send an email if any of the tests exceeds a 10% increase in run time. I have the following search query, which generates the table I want; however, I want this to run every night against versionN and versionN-1.

index="fe" source="regress_rpt" pipeline="soc" version IN("23ww10b","23ww11a") dut="*" (testlist="*") (testName="*") status="*" earliest=-1mon latest=now()
| eval lastTestPathElement=replace(testPath, ".*/" ,"")
| search lastTestPathElement="**"
| chart max(cyclesPerCpuSec) AS max:cyclesPerCpuSec BY version lastTestPathElement
| transpose header_field=version column_name=test_run
| eval cycles_version_delta=('23ww11a' - '23ww10b'),
       diff_percentage=round('cycles_version_delta'/'23ww11a' * 100, 1),
       status=if(diff_percentage < 10, "PASS", "FAIL")

Results table:

test_run      23ww10b  23ww11a  cycles_version_delta  diff_percentage  status
basic_test    631.68   663.80   32.12                 4.80             PASS
basic_test.1  457.48   742.98   285.50                38.40            FAIL
basic_test.2  730.04   691.25   -38.79                -5.60            PASS

This search is hard-coded to the versions 23ww10b and 23ww11a. I'd like to be able to automatically run it on the latest version and the latest version - 1, as well as send an email if there is any FAIL in the status column. What is the best way to do this, if it is even possible? Thanks, Phil
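One pattern to avoid the hard-coded version pair (a sketch; it assumes the YYwwNNx version strings sort lexically in release order, so the two newest versions are the top two after a descending sort) is to let a subsearch select the versions:

```
index="fe" source="regress_rpt" pipeline="soc" dut="*" status="*" earliest=-1mon latest=now()
    [ search index="fe" source="regress_rpt" pipeline="soc" earliest=-1mon
      | stats count by version
      | sort - version
      | head 2
      | fields version ]
| eval lastTestPathElement=replace(testPath, ".*/", "")
| chart max(cyclesPerCpuSec) AS maxCycles BY lastTestPathElement version
```

The subsearch expands to ( version="..." OR version="..." ). The later eval still needs column names that are not hard-coded; one workaround is to keep the chart columns sorted so the newer version is always the last column and address it positionally (for example after transpose). Saved as a nightly scheduled search, the email piece is just a final | search status="FAIL" with the alert condition "number of results > 0".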
I am trying to build an alert for login failures in AWS CloudTrail. In general I have it working, but my joins are missing some of the desired events. Specifically, I am building an 'index' value consisting of the username + IP, e.g.

| eval user_IP = username + src_ip

but I now see that some seemingly identical values are being evaluated as separate. For instance, when you click on the Selected Values view (left side in the results) there will be two separate entries which, at least on screen, appear to be identical.

What the popup shows:

user_IP
2 Values, 100% of events
Values                Count
firstuser172.31.1.1   2
firstuser172.31.1.1   1

I suspect there is a hidden character in the second value, or maybe a trailing space (though there is none when I try adding each to the search). How can I modify my eval to generate values without hidden characters? (I already tried adding a lower() function, but without success.)