All Posts

Hi there, make sure the user role has the required capabilities:

edit_tokens_settings, which turns token authentication on or off
edit_tokens_all, which lets you create, view, and manage tokens for any user on the instance
edit_tokens_own, which lets you create, view, and manage tokens for yourself

https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Security/Setupauthenticationwithtokens#Prerequisites_for_creating_and_configuring_tokens

Hope this helps ... cheers, MuS
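These capabilities are granted per role. A minimal authorize.conf sketch, assuming a hypothetical role name role_token_admin (the role name and imported role are illustrative, not from the docs):

```ini
# authorize.conf -- sketch only; role name is illustrative
[role_token_admin]
importRoles = user
edit_tokens_own = enabled
```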
Hello support team, we urgently need to update the Illumio logo here: https://splunkbase.splunk.com/app/3658 It would also be helpful to update the Illumio logo in any other locations throughout Splunk marketing materials.  Updated Illumio logos can be downloaded at the bottom of this page: https://illumio.frontify.com/d/VMEFDUaDvuv5/design-system#/design-system/logo Please contact me, Jacy, via PM for additional support.  
I don't see a "create new token" option under Settings > Token. Anyone else having this issue? Not sure if it's a permission-related issue, but others on the team also can't create a new token.
Try setting your alert to look back at least 15 minutes and use a search like this

| eval starttime=if(event="Starting",_time,null())
| eval stoptime=if(event="Stopping",_time,null())
| sort 0 _time desc
| streamstats time_window=15m latest(stoptime) as nextStop
| eval alert=if(isnull(nextStop) and time() - starttime > 15*60, "missing", null())
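The pairing logic behind that search can be sketched in plain Python, as a minimal illustration only: events are assumed to be (epoch_seconds, state) tuples with state "Starting" or "Stopping" (the state names come from this thread; the function name and simplified single-pending-start handling are my own):

```python
# Flag a "Starting" event that has run longer than 15 minutes without a
# matching "Stopping" event. Simplified: tracks only the latest pending start.

THRESHOLD = 15 * 60  # 15 minutes in seconds

def find_overdue(events, now):
    """Return start times older than THRESHOLD that never got a stop."""
    overdue = []
    pending_start = None
    for ts, state in sorted(events):
        if state == "Starting":
            pending_start = ts          # a new iteration began
        elif state == "Stopping":
            pending_start = None        # the iteration finished in time
    if pending_start is not None and now - pending_start > THRESHOLD:
        overdue.append(pending_start)
    return overdue
```

For example, a start at t=1200 with no stop by t=2400 (20 minutes later) would be flagged, while a start/stop pair 10 minutes apart would not.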
Thanks! This works wonders
So the events basically start every 15 mins. We have one event saying "Starting" and, when it finishes within the 15 mins, another saying "Stopped". Once I know that time, I can alert the team if it takes over 15 mins, since that could be an issue.
Thanks for clarifying.  I take it this is on Splunk Cloud.  Try this query

| rest splunk_server=local /services/cluster_blaster_indexes/sh_indexes_manager
| search *
| where isnull('archiver.selfStorageProvider')
| table title *self*

(Note the single quotes around the field name in the where clause, since it contains a dot.)
By that I mean indexes which do not have an S3 bucket attached for backup. When you go to Settings > Indexes and the list is displayed, there's a column called "Self storage". I want to configure a dashboard that displays all the indexes without self storage attached.
We have a custom dashboard in Splunk that has a few filters, one of which is a multiselect. This dashboard allows users to perform CRUD operations with POA&Ms. The multiselect in question lists all POA&M statuses that have been previously created, filtering the results displayed in the table. The filter works fine for searching results for the table. The issue is that if someone creates a new POA&M with a status that hasn't been used yet, e.g. "Closed", the page must be refreshed for the multiselect to re-execute the search powering it and display "Closed" as an option. Is there a way to "refresh" the multiselect with JavaScript after a new POA&M is created? The POA&M CRUD operations are performed with JS and Python, btw. Here's the XML of the multiselect for reference:
How are the two events linked? Can there be more than one "start" before any "stops"? Can start/stop pairs be interleaved? How frequently do you want to check?
So I have an index which contains the following: "Starting iteration" on one event and "Stopping iteration" on another event. I want to get the time taken from event 1 to event 2, and if it's over 15 mins then I can set up an alert to warn me.
The <<FIELD>> keyword is a text substitution and you still need quotes, so try this

| append [| inputlookup cidr_aws.csv ]
| foreach CIDR [ eval matched_ip = if(cidrmatch("<<FIELD>>", ip_address), ip_address, null()) ]
| search matched_ip!=null
| table matched_ip, CIDR
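The cidrmatch() comparison above can be illustrated outside SPL with the Python standard library's ipaddress module, a minimal sketch only (the function name and the list-based inputs are illustrative, not part of the original search):

```python
import ipaddress

def matched_ips(ip_addresses, cidrs):
    """Mimic SPL's cidrmatch(): return (ip, cidr) pairs where the IP
    falls inside the CIDR block."""
    results = []
    for cidr in cidrs:
        network = ipaddress.ip_network(cidr)
        for ip in ip_addresses:
            if ipaddress.ip_address(ip) in network:
                results.append((ip, cidr))
    return results
```

For example, "10.0.1.5" matches "10.0.0.0/16", while "192.168.1.1" does not.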
Guide to Monitoring URLs with Authentication Using Splunk AppDynamics and Python

Monitoring URLs is an important part of your full-stack monitoring. Splunk AppDynamics lets you monitor URLs with different authentication schemes. In this article, we will create a simple URL protected with a username and password. Afterwards, we will monitor it using the AppDynamics Machine Agent.

Create a Simple API with Python (Flask)

Install Flask and Flask-HTTPAuth (the code below imports flask_httpauth, so both packages are needed):

pip install flask flask-httpauth

Create the API: save the following Python code to a file, e.g., basic_auth_api.py:

from flask import Flask, request, jsonify
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
auth = HTTPBasicAuth()

# Dummy users for authentication
users = {
    "user1": "password123",
    "user2": "securepassword",
}

@auth.get_password
def get_pw(username):
    return users.get(username)

@app.route('/api/data', methods=['GET'])
@auth.login_required
def get_data():
    return jsonify({"message": f"Hello, {auth.username()}! Here is your data."})

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Run the API: start the server by running:

python basic_auth_api.py

Test the API: use curl to access it:

curl -u user1:password123 http://127.0.0.1:5000/api/data

You should see a response like this:

{ "message": "Hello, user1! Here is your data." }

Install the Machine Agent

You can install the Machine Agent as recommended here.

Set Up the URL Monitoring Extension

Clone the GitHub repo:

git clone https://github.com/Appdynamics/url-monitoring-extension.git
cd url-monitoring-extension

Download and install Apache Maven, configured with Java 8, to build the extension artifact from source. You can check the Java version used by Maven with mvn -v or mvn --version. If your Maven is using some other Java version, download Java 8 for your platform and set JAVA_HOME before starting Maven.
Run the following in the url-monitoring-extension directory:

mvn clean install

Go into the target directory and copy UrlMonitor-2.2.1.zip into the <MA-Home>/monitors/ folder, then unzip it there:

cd target/
mv UrlMonitor-2.2.1.zip /opt/appdynamics/machine-agent/monitors
cd /opt/appdynamics/machine-agent/monitors
unzip UrlMonitor-2.2.1.zip

This will create a UrlMonitor directory inside the monitors folder.

Monitor the URL

Inside the UrlMonitor folder, edit the config.yml file. Under sites, I have added:

sites:
  - name: AppDynamics
    url: http://127.0.0.1:5000/api/data
    username: user1
    password: password123
    authType: BASIC

Change:

metricPrefix: "Custom Metrics|URL Monitor|"

Now all you need to do is restart your Machine Agent. Afterward, you can see this URL monitor in your AppDynamics Controller.
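Under the hood, BASIC auth is just an Authorization header carrying base64("username:password"). A stdlib-only Python sketch of the kind of request the extension sends, using the test credentials from this article (the helper function name is illustrative):

```python
import base64
import urllib.request

def basic_auth_request(url, username, password):
    """Build a GET request carrying an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req

req = basic_auth_request("http://127.0.0.1:5000/api/data", "user1", "password123")
# urllib.request.urlopen(req) would return the JSON greeting
# while the Flask app from this article is running.
```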
What do you mean by "do not have a self storage attached"?  What problem are you trying to solve?
Hi, I'm trying to get a query for a table containing all the indexes that do not have self storage attached, but I couldn't find anything useful. Does anyone have an idea of how to do it?   Thanks!
Thanks a lot, that was the solution!
I'm seeing hundreds of these errors in the internal splunkd logs:

01-16-2025 12:05:00.584 -0600 ERROR UserManagerPro [721361 SchedulerThread] - user="nobody" had no roles

Is this a known bug? I'm guessing knowledge objects with no owner defined are causing this. It's annoying because it fills the internal logs with noise. Is there an easy workaround without having to re-assign all objects that lack a valid owner?
I didn't get your second point. We have a separate syslog server where a UF is installed, and from there logs are forwarded to our DS. What can I do now? Do I need to put props.conf on both the deployer and the forwarder?
Hi @splunklearner, the props.conf must be deployed to the Search Heads (using the SHC-Deployer if you have a cluster) and to the Forwarders that ingest logs, using the DS. Ciao. Giuseppe
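For reference, the stanza itself is small. A minimal props.conf sketch, assuming an illustrative sourcetype name my_json_sourcetype; KV_MODE is a search-time setting, so the copy on the Search Heads is the one that drives automatic JSON field extraction:

```ini
# props.conf -- sketch; sourcetype name is illustrative
[my_json_sourcetype]
KV_MODE = json
```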
Hello, I wanted to know where I should keep this attribute KV_MODE=json to extract the JSON fields automatically: on the deployment server, the manager node, or the deployer? We have props.conf in an app on the DS. The DS pushes that app to the manager node, and the manager distributes that app to the peer nodes. Can I add this in that props.conf? Or please suggest an alternative.