All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I've just configured one Splunk Enterprise instance and one server with a universal forwarder plus the Splunk Add-on for Unix and Linux. When I check the events on the Splunk server, I get an unusual result: it looks like my logs are not parsed, so there are no fields for disk usage, CPU usage, and so on. Because of that, the Splunk App for Unix and Linux dashboards don't show anything useful either. How do I solve this? I just followed the documentation and don't know how to fix this issue.
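A first thing worth checking (a hedged note rather than a definitive answer): the Splunk Add-on for Unix and Linux performs its field extractions at search time, so the add-on has to be installed on the Splunk Enterprise search head as well, not only on the forwarder. A quick sanity check, assuming the add-on's default index and sourcetypes from its inputs:

index=os sourcetype=cpu OR sourcetype=df OR sourcetype=vmstat | head 20

If raw events show up here but have no extracted fields, the add-on (and its props/transforms) is most likely missing on the search head.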
Hi, is there a way to get the list of public static IPs of the Hosted Synthetic Agents deployed across the globe for Synthetic Monitoring, so we can whitelist them in our firewall? I have whitelisted the IPs below for ports 80 and 443, as per https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges:

18.134.158.244/32 18.166.70.91/32 3.8.177.132/32 35.178.177.76/32 18.195.41.33/32 18.195.153.182/32 18.195.58.148/32 52.48.243.82/32 52.59.59.81/32 52.57.220.140/32 52.28.41.3/32 52.29.131.127/32 52.28.115.60/32 52.29.0.31/32 52.28.52.91/32 52.58.102.110/32 54.93.152.243/32 13.55.209.28/32 13.54.206.49/32 13.210.238.7/32 13.228.123.222/32 54.169.20.120/32 13.229.165.25/32 52.220.139.232/32 13.250.145.93/32 3.0.41.185/32 54.169.146.24/32 54.255.158.185/32 54.251.124.11/32 54.255.54.138/32 54.255.181.23/32 52.77.48.234/32 3.7.137.141/32 13.126.36.88/32 15.207.171.186/32 3.6.202.33/32 52.66.74.73/32 13.127.224.172/32 3.7.29.86/32 35.154.60.73/32 3.6.225.200/32

It is still not working; the check reports "This site can't be reached - ERR_CONNECTION_TIMED_OUT (connection to server refused or timed out)". Please suggest. Thanks in advance. Best regards, Kaushal
We have two deployment servers: one has apps for all of our servers, the other has apps for all of our workstations. By mistake we rolled out some universal forwarders to the servers and pointed them at the wrong deployment server. I went through all of them and pointed them at the correct deployment server by editing C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclient.conf and restarting Splunk.

Some of those servers have moved to the correct deployment server and are listed on its Forwarder Management page, while others still show up on the wrong deployment server (it has been over a week). This makes no sense to me. What could I be missing, or what did I do wrong? I am new to Splunk, with less than 2 years of experience.
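For reference, a minimal deploymentclient.conf of the kind described above might look like the sketch below (the hostname and port are placeholders). On any forwarder that has not moved, running splunk show deploy-poll from the forwarder's bin directory shows which deployment server it is actually polling, which helps confirm whether the edit took effect:

[target-broker:deploymentServer]
targetUri = correct-ds.example.com:8089

# CLI equivalent, run on the forwarder:
#   splunk set deploy-poll correct-ds.example.com:8089
#   splunk restart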
Hi all, I have a pivot that changes its number of columns based on a drop-down selection. The first two columns stay the same, but the remaining columns can change (example 1 has 6 additional columns with auto-generated column names, whereas example 2 has 4).

Example 1:
CartridgeType  Cartridge  E:::MCAS1  E:::MCAS2  S:::MCAS1  S:::MCAS2  S:::MCAS3  S:::MCAS4
user           etf        4          4          4          4          4          4
product        brd        4          4          5          5          5          5

Example 2:
CartridgeType  Cartridge  E:::MCAS1  E:::MCAS2  D:::MCAS1  D:::MCAS2
user           etf        4          4          4          4
product        brd        4          4          5          5

Is it possible (purely through the HTML or CSS, i.e. via the 'Source' button when editing a dashboard) to highlight a row in red if the values in the dynamically generated columns differ along that row? For instance, in example 1 the second row (product / brd) has MCAS values 4, 4, 5, 5, 5, 5, which are not all equal, so that whole row should be highlighted.

What makes this confusing is that i) the column names are not static and ii) the number of columns changes with the drop-down selection. Any help would be hugely appreciated! I've looked at the sample dashboards but haven't been able to work out a solution from them, and I am unable to implement a JS option.
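Not a pure HTML/CSS answer, but one search-side idea, offered as a sketch under the assumption that the panel's underlying pivot search can be edited: compute a single mismatch flag across the dynamic columns with foreach, so the dashboard only ever has to colour on one statically named column. The *MCAS* wildcard is an assumption based on the column names shown above:

... existing pivot search ...
| foreach *MCAS*
    [ eval mcas_values=mvappend(mcas_values, '<<FIELD>>') ]
| eval mismatch=if(mvcount(mvdedup(mcas_values)) > 1, "yes", "no")
| fields - mcas_values

The mismatch column could then drive formatting (for example via the table's colour options), without hard-coding the auto-generated column names.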
I'm running what I believe to be a fairly standard input from the Splunk Linux TA. I just realized that for some hosts the event time is in the future. All other log timestamps from these hosts are correct; only the package.sh output is 15 minutes in the future. The clock on the affected hosts is correct. How can the events have the wrong time when it was a script Splunk executed itself?

Input:

[script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/package.sh]
sourcetype = package
source = package
interval = 86400
disabled = false
index = os
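One hedged explanation: package.sh output contains package names and version/date-like strings, and without an explicit timestamp configuration Splunk may pick one of those up as the event time. Since the data is generated by a scripted input at collection time, a common workaround is to force index time as the event time for that sourcetype; a sketch (props.conf on the indexers or heavy forwarder, assuming the sourcetype stays "package"):

[package]
DATETIME_CONFIG = CURRENT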
Hi, I'd like to create a visualization that shows trends across the alerts that have fired. The graph should show the frequency of a given set of alerts and how often each was triggered against the source file. Thanks, Rob
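A sketch of one way to get at that data, assuming the alerts are Splunk scheduled searches with alert actions and that the alert_actions field in scheduler.log is populated when an action fires (the span is arbitrary):

index=_internal sourcetype=scheduler alert_actions=* alert_actions!=""
| timechart span=1d count by savedsearch_name

That gives triggered-alert counts per alert over time, which can then be rendered as a line or column chart.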
Hi, I'm trying to configure HEC on our indexer cluster, which doesn't have any heavy forwarders. Could anyone tell me about the process? I have read some community answers and documents saying that we create the tokens on the cluster manager and distribute them to the indexers, but I'm quite new to this process. Detailed steps would be very much appreciated.
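A rough outline of the usual approach, as far as I understand it (the app name, port, token, index, and sourcetype are all placeholders): put the HEC configuration in an app under master-apps (manager-apps on newer versions) on the cluster manager and push it with the cluster bundle, so every peer exposes the same token.

# On the cluster manager: $SPLUNK_HOME/etc/master-apps/org_hec_inputs/local/inputs.conf
[http]
disabled = 0
port = 8088

[http://my_hec_token]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = my_sourcetype

# Then push the bundle from the cluster manager:
splunk apply cluster-bundle --answer-yes

Senders would then POST to https://<any-indexer-or-load-balancer>:8088/services/collector using that token.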
Hi, a user needs a link that contains the Splunk query and its results, and he wants to attach the link to an existing dashboard panel. The shared job link is only active for 7 days; is there a way to attach a permanent link to the dashboard?
Our development teams have started using Microsoft Power Apps to build applications more quickly. Some applications are having performance challenges, and the teams are looking to install AppDynamics to monitor the applications and assist with resolving issues. Has anyone else tried to do this?
I'm currently working on migrating a single-node installation to indexer and search head clusters. Specifically, for migrating the scheduled searches and alerts to the SHC, the recommended way of doing this appears to be to use the deployer to deploy an app containing the configuration to the members. I've tested it and it works: if I do the initial deployment with savedsearches.conf in the 'local' directory, the SHC members can make updates via the UI using the standard sync mechanism, and those changes won't get overwritten by future deployments because they are in the 'local' directory. So far so good.

But I am curious: is there anything architecturally wrong with stopping all of the SHC members, replacing SPLUNK_HOME/etc/apps/search/local/savedsearches.conf on each member with the same identical file copied from the single-node install, and then bringing them back up? I'm led to believe this would work, but I'm not sure whether it would cause consistency issues with future updates and sync across the cluster.

Why would I want to consider doing this? Mainly to avoid having to look for scheduled searches in two places (the "search" and "migration" apps) when making updates via the UI, which will be the main way users add and edit scheduled searches. Are there any other potential pitfalls with this approach, or methods that would avoid having to maintain scheduled searches across two apps?

I'm only asking here about the scheduled searches. Everything else has been migrated to the correct locations via the indexer/search head cluster mechanisms, so for the purpose of this question you can assume that everything else regarding field extractions, indexes, etc. for those scheduled searches has already been migrated and I just need to migrate the scheduled search definitions. Thanks in advance.
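For anyone comparing the two options, this is roughly what the deployer-based route described above looks like on disk (the app name is just an example); the direct-copy alternative would bypass this and rely on the members replicating subsequent UI changes among themselves:

# On the deployer
$SPLUNK_HOME/etc/shcluster/apps/org_migrated_searches/local/savedsearches.conf

# Push to the members
splunk apply shcluster-bundle -target https://<any-shc-member>:8089 -auth admin:<password>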
Hi, I have a use case with an if condition involving multiple comparisons. Based on its outcome, I want to reassign values in multiple fields. Consider the example below.

My fields are A1, B1, C1, A2, B2, C2 and a few others. I have an if condition; when it is true I want to assign values as below, and when it is false do nothing:

A1=A2
B1=B2
C1=C2

Right now, to do this I would have to write three separate eval commands, each repeating exactly the same comparison:

| eval A1=if(<condition>, A2, A1)
| eval B1=if(<condition>, B2, B1)
| eval C1=if(<condition>, C2, C1)

Is there a way to use the if only once so that, when it is true, all three fields get assigned in one go? And if there is, is it actually better in terms of performance? I would be running this over more than a hundred thousand records.
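One pattern that avoids repeating the comparison, offered as a sketch under the assumption that every field to update ends in 1 and has a matching partner ending in 2: evaluate the condition once into a flag, then let foreach generate the per-field assignments. Note that the *1 wildcard will also catch any other field ending in 1, so the field naming needs to be that regular:

| eval swap=if(<condition>, 1, 0)
| foreach *1
    [ eval <<FIELD>>=if(swap=1, '<<MATCHSTR>>2', '<<FIELD>>') ]
| fields - swap

Performance-wise the saving is mainly that the condition itself is evaluated once per event instead of three times, which is usually minor; the bigger benefit is not repeating the comparison in three places.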
Hi Splunk gurus, could someone help me resolve my issue with timestamp extraction? When I create a sourcetype with a custom timestamp via the advanced configuration, where I defined TIME_PREFIX as the regular expression

^(?:[^\}\n]*\}){4},\{"\w+":"(?P<timestamp_ex>[^"]+)

the timestamp extraction does not work and I get the error "failed to parse timestamp. Defaulting to file modtime." I got the regular expression from Splunk field extraction. Why doesn't Splunk accept a regex that was generated by Splunk itself and that works when tested on regex101.com?
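One likely cause, offered as a hedged guess: TIME_PREFIX is only meant to match the text leading up to the timestamp, so it should not contain a capture group that swallows the timestamp itself, and it is usually paired with TIME_FORMAT and MAX_TIMESTAMP_LOOKAHEAD. A sketch of what that could look like (the sourcetype name and TIME_FORMAT are assumptions, since the actual timestamp string isn't shown):

[my_json_sourcetype]
TIME_PREFIX = ^(?:[^\}\n]*\}){4},\{"\w+":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40

The regex from the field extractor is built for pulling out a field at search time; for timestamp recognition the (?P<timestamp_ex>...) group needs to be dropped so that Splunk starts reading the timestamp immediately after the prefix match ends.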
Hi, I installed these RSS apps on Splunk:

https://splunkbase.splunk.com/app/278/
https://splunkbase.splunk.com/app/2646/

FYI: they don't seem to be compatible with Splunk version 8, which is what I am using. After installation completed, when I open the app I get this error:

2021-07-22 16:37:00,416 INFO [60f95f64637f60d7ebff50] startup:139 - Splunk appserver version=8.0.4 build=767223ac207f isFree=False isTrial=False
2021-07-22 16:37:00,646 WARNING [60f95f64637f60d7ebff50] appnav:404 - An unknown view name "home" is referenced in the navigation definition for "rssjava".
2021-07-22 16:37:00,647 WARNING [60f95f64637f60d7ebff50] appnav:404 - An unknown view name "hosts" is referenced in the navigation definition for "rssjava".
2021-07-22 16:37:00,647 WARNING [60f95f64637f60d7ebff50] appnav:404 - An unknown view name "metrics" is referenced in the navigation definition for "rssjava".
2021-07-22 16:37:00,648 WARNING [60f95f64637f60d7ebff50] appnav:404 - An unknown view name "settings" is referenced in the navigation definition for "rssjava".
2021-07-22 16:37:00,653 INFO [60f95f64637f60d7ebff50] error:321 - Masking the original 404 message: 'Splunk cannot find the "None" view.' with 'Page not found!' for security reasons

Any ideas? Thanks
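Those warnings say the app's navigation references views ("home", "hosts", "metrics", "settings") that Splunk 8 cannot find, presumably because they were built with the old view framework that this version no longer loads, so there is no default view to land on. If the goal is just to get the app to open, one workaround (a sketch, not a fix for the app itself) is to override its navigation in the app's local directory, e.g. in $SPLUNK_HOME/etc/apps/rssjava/local/data/ui/nav/default.xml, so it points at a view that does exist:

<nav search_view="search">
  <view name="search" default="true" />
</nav>

That only restores navigation; the app's own dashboards would still need to be rebuilt in a supported format to work on version 8.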
Hi, I have the following JSON object, and I would ultimately like to create a bar chart with:

X-axis: animal type, i.e. dog, cat, chicken...
Y-axis: the length of each animal's array; in this example dog=2, cat=3, chicken=1.

{
  "data": {
    "animals": {
      "dog": [{"name": "rex", "id": 1}, {"name": "tom", "id": 2}],
      "cat": [{"name": "rex", "id": 3}, {"name": "tom", "id": 4}, {"name": "sam", "id": 5}],
      "chicken": [{"name": "rex", "id": 6}]
    }
  }
}

I'm new to Splunk, so apologies, but I'm not sure where to even begin. Thanks in advance for any help.
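A place to begin, assuming each JSON document is one event (the index and sourcetype names below are placeholders): pull each array out with spath, count its elements with mvcount, and transpose so the animal types become rows that a bar chart can use:

index=my_index sourcetype=my_json
| spath path=data.animals.dog{} output=dog
| spath path=data.animals.cat{} output=cat
| spath path=data.animals.chicken{} output=chicken
| eval dog=mvcount(dog), cat=mvcount(cat), chicken=mvcount(chicken)
| fields dog cat chicken
| transpose
| rename column AS animal, "row 1" AS count

This hard-codes the animal names; if the set of animals varies between events it gets more involved, but the sketch should show the general shape.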
Hello, I am struggling to convert a metric from a raw count into a percentage of the total, in this case by browser type, and it needs to be shown with the time series bucket function. At the moment I can show the counts as a time series (see attached picture) using the following queries:

SELECT series(eventTimestamp, '1m'), count(browser) AS "Firefox" FROM browser_records WHERE browser = "Firefox"
SELECT series(eventTimestamp, '1m'), count(browser) AS "Non-Firefox" FROM browser_records WHERE browser != "Firefox"
SELECT series(eventTimestamp, '1m'), count(*) AS "Total" FROM browser_records

However, I am unable to convert these into percentages. I know there is a filter function, e.g.:

SELECT 100*filter(count(*), browser = "Firefox") / count(*) AS "% Firefox" FROM browser_records
SELECT 100*filter(count(*), browser != "Firefox") / count(*) AS "% Non-Firefox" FROM browser_records

but these only return a single value, not a time series as I expected. How can I combine the series bucketing function and the filter function to get the browser percentage as a time series? Has anyone done this before?
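I have not verified this against ADQL myself, so treat it purely as an untested guess: if filter() and the percentage arithmetic are allowed next to series() in the select list, combining the two pieces above might look like this:

SELECT series(eventTimestamp, '1m'),
       100 * filter(count(*), browser = "Firefox") / count(*) AS "% Firefox"
FROM browser_records

If that is not accepted inside a series query, the fallback would be plotting the filtered count and the total count as two series and computing the percentage in whatever renders the chart.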
Hi guys, I have a requirement to monitor some health-check API URLs using Splunk. I tried creating an input like the one below in the Website Monitoring app:

[web_ping://PROD - SERVER_NAME APP_NAME]
host = SERVER NAME
index = prod_website_monitoring
interval = 15m
title = PROD - SERVER_NAME APP_NAME
url = http://server:8092/api/healthCheckapi

but it is not working. I get the error "Cannot GET /healthCheckapi", which suggests the endpoint needs a POST rather than a GET request. Can someone help me with the inputs? Thanks,
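I don't know of a way to make web_ping issue a POST, so one hedged alternative is a plain scripted input that POSTs with curl and logs the HTTP status code; all names and paths below are placeholders:

# inputs.conf in your monitoring app
[script://./bin/healthcheck.sh]
interval = 900
index = prod_website_monitoring
sourcetype = healthcheck
disabled = false

# bin/healthcheck.sh
#!/bin/sh
# POST to the health-check endpoint and emit a one-line event with the status code
URL="http://server:8092/api/healthCheckapi"
CODE=$(curl -s -o /dev/null -w '%{http_code}' -X POST "$URL")
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) url=$URL status=$CODE"

The status field then makes it easy to alert on anything other than 200.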
Hi all, I am trying to understand whether the standard ServiceNow integration method would also include the tags assigned to the AppDynamics entity. I have read the docs and KB articles but cannot find whether that is the case. Regards, Ray.
I'm searching for the updated Business Value webinar. Unfortunately, the link for the session by Doug May is no longer available, even if you register with an acceptable business email. The session addressed:

- How your peers are messaging the business value of Splunk software in their companies
- How free and easy-to-use tools can help you document your Splunk business value
- How to speed adoption, increase business impact, and highlight your efforts

Any suggestions on related webinars?
If I run the query below over the last 7 days, and there is no data matching index=abc "searchTerm" for day 1, the results only show day 2 through day 7. I want a row in the result set for day 1 as well, with TotalResults set to 0 when there is no data.

index=abc "searchTerm"
| bucket _time span=1d
| stats count as TotalResults by _time
| makecontinuous _time
| fillnull TotalResults

Please help.
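One thing to try (a small change rather than anything new): makecontinuous only fills gaps between the earliest and latest _time values that actually appear in the results, so a missing day at the edge of the range never gets created. timechart pads every bucket in the selected time range with zeros, which sounds like exactly what's wanted here:

index=abc "searchTerm"
| timechart span=1d count AS TotalResults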
Hi, I'm reviewing the SignalFx API documentation, and the get-chart-by-id endpoint (/api/charts/latest#endpoint-get-chart-using-chart-id) returns a lot of chart properties, including programText. I'm wondering whether SignalFx provides a way to use these properties, together with the data fetched by executing the programText, to generate a chart; whether it provides a way to export the chart in HTML/image format; or whether it provides a framework like the Splunk JS stack so we can easily bring what we want into our own web apps. Thanks!
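For the first part at least, pulling the chart definition (including programText) out by id is a plain REST call; a sketch, with the realm and token as placeholders:

curl -s -H "X-SF-TOKEN: $SFX_TOKEN" "https://api.us0.signalfx.com/v2/chart/<chart_id>"

Rendering that definition as an image or HTML outside the SignalFx UI is a separate question that the chart API response alone doesn't answer.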