All Topics


If I remove the edit_user capability from a Splunk role, will it affect those users' ability to edit their own user preferences? Is editing user preferences tied to any role capability in the first place?
I've got an on-premises Splunk Enterprise 8.1.2 deployment. I keep having a recurring issue where users report that their searches are being queued due to disk quota:

This search could not be dispatched because the role-based disk usage quota of search artifacts for user "jane.doe" has been reached (usage=7757MB, quota=1000MB). Use the Job Manager to delete some of your search artifacts, or ask your Splunk administrator to increase the disk quota of search artifacts for your role in authorize.conf.

So naturally I go to the Job Manager to see what's up, but the jobs I find there don't come anywhere close to the quota. This is the second time this issue has come up. Previously I did a bunch of digging but was never able to find any record of what was actually using up the quota; that was a while back, so unfortunately I don't have notes on it. I ended up just increasing the quota for the user's role and things started working again. Now that it's happening again, I figured I'd post here to see if anyone has advice on how to find what's consuming the quota.
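One way to see artifact usage that the Job Manager view may not surface (for example, artifacts of scheduled searches the user owns) is to query the jobs endpoint directly. A sketch, assuming the "jane.doe" username from the error message above:

```
| rest /services/search/jobs splunk_server=local count=0
| search eai:acl.owner="jane.doe"
| stats sum(diskUsage) AS total_bytes count AS job_count BY eai:acl.owner
```

The diskUsage values are reported per job in bytes, so summing them and comparing against the role quota can show whether hidden jobs account for the difference.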
Hi Team, we recently received a notification from Splunk about upgrading to the Victoria Experience, and I want to know whether it is better to upgrade to Victoria or stay with the Classic Experience:

"Dear Splunk Cloud Platform Customer, We are reaching out to inform you of an upcoming maintenance window for Splunk to deliver a cloud migration for your stack: ***** This migration is known as the Victoria Experience."

This is the message we received from the Splunk team. Does anyone have any background on the Classic and Victoria experiences? Please suggest which one is better to go with.
All, I have an index with fields like appId and responsetime. I also have a lookup file where the appId is the same, but paired with a proper name. For example:

INDEX OUTPUT
appId, responsetime
202, 1200

OUTPUT file
appId, serviceName
202, serviceA

I am looking for syntax that produces:

serviceName, responsetime
serviceA, 1200

And on top of this, I want to create a chart from the result. I was playing around with a join query and was able to create a table:

index=xx
| dedup appId
| eval duration = RT - FT
| join type=inner appId [| inputlookup tmpfile.csv | rename serviceA as URL]
| table appId serviceA responsetime
| where appId = appId

But I cannot create charts with avg(responseTime). Can someone help? Thanks, Amit
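A join is usually unnecessary for this: the lookup command can enrich each event with the service name directly, after which stats and chart work as normal. A sketch, assuming the lookup file and field names from the question:

```
index=xx
| lookup tmpfile.csv appId OUTPUT serviceName
| stats avg(responsetime) AS avg_response BY serviceName
```

Note that SPL field names are case-sensitive, so responsetime and responseTime are different fields; the name used in avg() must match the extracted field exactly.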
Hi there, is it possible to add static thresholds on the Gauge widget? Dash Studio does provide thresholds, but it does not provide a metric expression with variable declarations. Regards, Frans
We're looking to create an alert based on the number of failures for a given field (clientip) per time frame. Here is the search so far:

sourcetype="access_combined" POST 401 "/cas/login"
| stats count by clientip

Basically, we only want to be alerted when the number of events from any unique clientip hits 10 per minute. We currently have the alert trigger if the number of results is greater than 9.
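One sketch that enforces the per-minute window is to bin events into one-minute buckets before counting, then keep only client IPs that cross the threshold, assuming the field names above:

```
sourcetype="access_combined" POST 401 "/cas/login"
| bin _time span=1m
| stats count BY _time clientip
| where count >= 10
```

The alert can then simply trigger when the number of results is greater than 0, since every returned row is already a clientip/minute pair that hit the threshold.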
My understanding is that MLTK is an out-of-the-box app. In that case, if my Splunk instance is upgraded, will out-of-the-box apps such as MLTK be upgraded automatically too? Or do we need to manually update to the latest version from Splunkbase, as with other apps?
Hello Team, I am using the Splunk REST APIs to integrate Splunk with CPI. For the "Get token configuration details" endpoint I am getting an error. Please help me determine whether this URL is correct:

Splunk document URL: https://localhost:8089//servicesNS/nobody/system/data/inputs/http/http%3A%252F%252F%22myapp%22/http/%252Fvar%252Flog

What is this %3A%252F%252F%22myapp%22/http/%252Fvar%252Flog? Please help me understand, or share the correct URL. Thanks, Venkata
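The odd-looking segment is a URL-encoded input name whose slashes have been percent-encoded twice: %252F decodes to %2F, which decodes to /. A small sketch showing the decoding, using the segments from the URL above:

```python
from urllib.parse import unquote

segment = 'http%3A%252F%252F%22myapp%22'
once = unquote(segment)   # first pass: 'http:%2F%2F"myapp"'
twice = unquote(once)     # second pass: 'http://"myapp"'
print(twice)

path = unquote(unquote('%252Fvar%252Flog'))  # '/var/log'
print(path)
```

So the documented URL is using http://"myapp" and /var/log as placeholder names for a data input and a path; in a real call these must be replaced with your own input's name, encoded the same double-escaped way so the embedded slashes are not treated as URL path separators.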
Hi all, I'm an infrastructure guy and have no real Splunk knowledge. The only thing I know is that Splunk is writing a lot of unaligned 4K I/O, and in some circumstances 512K sequential writes. Is there any parameter that can be set to tell Splunk to write 4K-aligned I/Os? Thanks, Fred
We have some Power Distribution Units and UPSes that the DC team is planning to ingest into Splunk for monitoring. Is there a way to monitor data from those devices in Splunk?
I have a modular input that accepts AWS credentials when configuring an input for the add-on. The secret key field is a password-type field, so after inputs are saved to inputs.conf, the secret key is encrypted and stored in passwords.conf. This code to get the decrypted value while processing events was working fine for the add-on:

helper.get_arg('access_key')

But after upgrading the add-on with Add-on Builder v4.x.x, the same code returns ***** instead of the actual value. What might be the issue? Does something need to be done before upgrading, or is there another way to get the decrypted data from passwords.conf?
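If get_arg() keeps returning the masked value after the upgrade, one fallback is to read the credential straight from the storage/passwords endpoint with the Splunk Python SDK. A sketch only, assuming a valid session key is available and that the realm and username under which the add-on stored the secret are known (both names below are hypothetical):

```python
import splunklib.client as client

# session_key comes from the modular input's runtime context
service = client.connect(token=session_key, owner="nobody", app="my_addon")

secret = None
for cred in service.storage_passwords:
    # "aws_input_name" is a placeholder for however the add-on named the entry
    if cred.username == "aws_input_name":
        secret = cred.clear_password
        break
```

The exact realm/username format used by Add-on Builder can differ between versions, so it is worth inspecting the passwords.conf stanza names before hard-coding a match.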
Hello, I have two dashboards. The first displays references with errors; the second searches for a specific reference and displays all the steps of work (the second dashboard has a time selector and a reference selector, and the reference selector feeds a variable used by all elements of the dashboard). I would like to add a link in my first dashboard that opens the second with the filter filled in automatically. Is this possible? Could you help me please?
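In Simple XML this is typically done with a drilldown link on the first dashboard's table that sets the second dashboard's form tokens in the URL. A sketch, where second_dashboard, the form.reference token, and the reference column name are all hypothetical and need to match your actual dashboard:

```xml
<drilldown>
  <link target="_blank">/app/search/second_dashboard?form.reference=$row.reference$</link>
</drilldown>
```

Time-range tokens can be passed the same way (for example form.time_tok.earliest and form.time_tok.latest) if the second dashboard's time picker has a token name.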
Hi folks, we use the latest cluster agent and auto-instrumentation. For .NET Core we have:

docker.io/appdynamics/dotnet-core-agent:latest
APPDYNAMICS_AGENT_REUSE_NODE_NAME: "true"
APPDYNAMICS_AGENT_REUSE_NODE_NAME_PREFIX: node

and that works, but for the Node.js agent:

docker.io/appdynamics/nodejs-agent:22.5.0-14-stretch-slim

it doesn't. Each restart of the agent creates a new node with the name incremented by 1. How can we fix this?

^Post edited by @Ryan.Paredez for minor formatting changes.
Hi All, I am trying to create a table out of the log below:

ServerA ServerB ServerC
ADFILES41-6.2-4 not_available ADFILES41-6.2-4.2
ADM41-5.10.1-4 ADM41-5.10.1-4 ADM41-5.10.1-4
ADM41HF-5.10.1HF004-4 ADM41HF-5.10.1HF004-4 ADM41HF-5.10.1HF004-4
ADM42-5.11-4 ADM42-5.11-4 ADM42-5.11-4
ADM42HF-5.11HF03-4 ADM42HF-5.11HF03-4 not_available
TRA42-5.11-4 TRA42-5.11-4 not_available
not_available ADFILES42-6.2-4 not_available
not_available not_available TRA42-5.13-4

The first line gives the server names; the second and subsequent lines list the applications available on each server. For example, from the second line you can see that ADFILES41-6.2-4 is available on A and C but not on B. Similarly, from the ninth line you can see that TRA42-5.13-4 is available on C but not on A or B. The requirement is to create a table in the form below, showing whether any server is missing an application:

Server | ServerA | ServerB | ServerC
Application | ADFILES41-6.2-4 | not_available | ADFILES41-6.2-4
Application | ADM41-5.10.1-4 | ADM41-5.10.1-4 | ADM41-5.10.1-4
Application | ADM41HF-5.10.1HF004-4 | ADM41HF-5.10.1HF004-4 | ADM41HF-5.10.1HF004-4
Application | ADM42-5.11-4 | ADM42-5.11-4 | ADM42-5.11-4
Application | ADM42HF-5.11HF03-4 | ADM42HF-5.11HF03-4 | not_available
Application | TRA42-5.11-4 | TRA42-5.11-4 | not_available
Application | not_available | ADFILES42-6.2-4 | not_available
Application | not_available | not_available | TRA42-5.13-4

Please help me with a query to get the table in the desired form. Any help would be highly appreciated. Thank you all!
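Structurally, the transformation is just a whitespace split per line: the header row supplies the column names and every later row supplies one application per server. A small sketch of that logic in Python, assuming the whole multi-line block arrives as a single event (only a subset of the rows is shown):

```python
event = """ServerA ServerB ServerC
ADFILES41-6.2-4 not_available ADFILES41-6.2-4.2
not_available not_available TRA42-5.13-4"""

# Split each line on whitespace; first line is the header.
lines = [line.split() for line in event.strip().splitlines()]
servers, rows = lines[0], lines[1:]

# One dict per application row, keyed by server name.
table = [dict(zip(servers, row)) for row in rows]
for row in table:
    print(row)
```

In Splunk the analogous approach is to treat the block as one event, split it into lines, and map columns to the header fields (for example with multikv, or with split() and mvexpand).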
(Search head cluster / indexer cluster environment)

I have written a custom search command using the template provided by Splunk for streaming commands. In an attempt to force the search to run on the search heads and not on the indexers, I added @Configuration(local=True) to the code:

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators

@Configuration(local=True)
class StreamingCSC(StreamingCommand):

I got that change from https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/pythonclassescustom/ but the search still dies. If I modify my search to put a | sort before | mycustomcommand, the search is forced to run locally and it works fine. What am I doing wrong in trying to keep this search off the indexers and running only on the search head cluster?
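If the command runs under the version 2 search command protocol (chunked = true in commands.conf), the setting that keeps a streaming command on the search head is distributed rather than local. A sketch of that variant, reusing the class name from the question; whether it applies depends on which protocol version dispatches the command, so treat it as one thing to try rather than a confirmed fix:

```python
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration(distributed=False)
class StreamingCSC(StreamingCommand):
    def stream(self, records):
        for record in records:
            yield record
```

The | sort workaround behaves the same way because sort is a non-streaming command, which forces everything after it to run on the search head.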
Hi, I would like to return the rex "field" from a subsearch so I can print it out. How do I do that?

index=... "some text"
| sort - _time
    [search message
    | rex "\[(?<number>\d{3,5})"
    | rex "(?<field>\w{2,4}@\d{1,4})"
    | return field]
| dedup number
| table _time number field

In the result table the field column is always empty. Thanks for your help!
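A subsearch returns a search filter to the outer search, not a column: | return field expands into a field="value" term that narrows the outer search, which is why the column stays empty. If the goal is simply to display the extracted values, one sketch is to run the rex extractions in the outer search instead:

```
index=... "some text"
| rex "\[(?<number>\d{3,5})"
| rex "(?<field>\w{2,4}@\d{1,4})"
| dedup number
| table _time number field
```

This extracts number and field on the events being tabled, so both columns are populated.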
Hi, I'm using Splunk Web to check some searches/alerts:

1. | rest /servicesNS/-/-/saved/searches/ splunk_server=local | table title
   <-- displays a list of saved searches

Then I pick one from the list and launch:

2. rest /servicesNS/-/-/saved/searches/alert_without_white_spaces splunk_server=local
   <-- and it works

But when querying for a differently named alert I get an error:

3. rest /servicesNS/-/-/saved/searches/alert with white spaces splunk_server=local
   <-- does not work; error message: Error in 'rest' command: Invalid argument: '-'

3a) rest /servicesNS/-/-/saved/searches/'alert with white spaces' splunk_server=local
   <-- does not work; error message: Error in 'rest' command: Invalid argument: '-'

3b) rest /servicesNS/-/-/saved/searches/"alert with white spaces" splunk_server=local
   <-- does not work; error message: Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/-/-/saved/searches/alert with white spaces?count=0 from server=https://127.0.0.1:8089 - Not Found

3d) rest /servicesNS/-/-/saved/searches/alert\ with\ white\ spaces splunk_server=local
   <-- error message: Error in 'rest' command: Invalid argument: '-\'

3e) | eval alert1="alert with white spaces" | rest /servicesNS/-/-/saved/searches/alert1
   <-- error message (Splunk used the variable name, not its value): Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/-/-/saved/searches/alert1?count=0 from server=https://127.0.0.1:8089 - Not Found

Is there a way to use variables, or to query for a search name containing white spaces, without getting an error?
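The rest command takes a literal URI path, so spaces in the saved search name need to be percent-encoded rather than quoted or backslash-escaped. A sketch, assuming the saved search name from the question:

```
| rest /servicesNS/-/-/saved/searches/alert%20with%20white%20spaces splunk_server=local
```

Alternatively, listing all saved searches and filtering afterwards avoids the encoding issue entirely, for example | rest /servicesNS/-/-/saved/searches splunk_server=local followed by | search title="alert with white spaces".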
Hi Team, I'm using the Splunk Cloud REST API /services/collector/event to post data to Splunk Cloud. What is the GET API for fetching the data back?
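There is no symmetric GET on the collector endpoint: data sent through HEC is indexed, and it is read back by running a search through the search REST endpoints. A sketch with curl, where the host, credentials, and index are placeholder assumptions to replace with your own:

```
curl -k -u admin:changeme "https://<stack>:8089/services/search/jobs/export" \
  -d search="search index=main earliest=-15m" \
  -d output_mode=json
```

The export variant streams results back in one request; the plain /services/search/jobs endpoint instead creates a job whose results are fetched in a second call.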
Hey Community, I need guidance with the scenario below. A user will provide an IP address as input. I want the last two octets of the input IP to be compared with the last two octets of the src_ip field, and the matching results returned. For example:

input_ip="1.2.3.4"
src_ip="4.5.3.4"

This should return results, since the last two octets match. I have tried replacing the first two octets with * using regex and strcat, but it doesn't work for me.
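The comparison itself is just "split on dots, compare the tail". A sketch of the logic in Python, using the example values from the question:

```python
def last_two_octets_match(ip_a: str, ip_b: str) -> bool:
    """Compare only the last two octets of two dotted-quad IPv4 addresses."""
    return ip_a.split(".")[2:] == ip_b.split(".")[2:]

print(last_two_octets_match("1.2.3.4", "4.5.3.4"))  # third and fourth octets match
print(last_two_octets_match("1.2.3.4", "4.5.9.4"))  # third octet differs
```

In SPL the same idea can be expressed with split() and mvindex() in an eval to build a "tail" string from each address, followed by a where comparison of the two tails.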
Hi All, please help me with Splunk alerts for the scenario below. Thanks, Vijay Sri S