All Posts


When I run the query below, I am not able to get sla_violation_count:

index=* execution-time=* uri="v1/validatetoken"
| stats count as total_calls, count(eval(execution-time > SLA)) as sla_violation_count

total_calls displays as 1, but sla_violation_count comes back empty. Pasting a sample result below for reference:

{ datacenter: aus env: qa execution-time: 2145 thread: http-nio-8080-exec-2 uri: v1/validatetoken uriTemplate: v1/validatetoken }

Thanks in advance
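The likely culprit is the hyphen in execution-time: inside eval, an unquoted execution-time is parsed as the subtraction "execution minus time", so the comparison never matches. Field names containing hyphens must be wrapped in single quotes in eval expressions. A minimal sketch of the corrected stats call, assuming SLA is a numeric field present in the events (if it is not, substitute a literal threshold such as 2000):

index=* execution-time=* uri="v1/validatetoken"
| stats count as total_calls, count(eval('execution-time' > SLA)) as sla_violation_count

Note also that the sample event shown contains no SLA field; if SLA is null, the eval returns null and the conditional count stays empty, so that may be a second cause here.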
I have a fairly common Splunk deployment: 1 SH, 1 DS, and two indexers. I want to migrate from one Linux distro to another. Any experiences? I only have this: https://docs.splunk.com/Documentation/Splunk/9.1.4/Installation/MigrateaSplunkinstance - documentation which is certainly lacking!
I'm currently experiencing difficulties integrating my Node.js application with AppDynamics. Despite following the setup instructions, I'm encountering issues with connecting my application to the AppDynamics Controller.
Hi Rich, How would I incorporate an average of genSecondsDifference over a 24-hour period, for 7 days?
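A minimal sketch of one way to do this, assuming genSecondsDifference is a numeric field (the base search here is a placeholder for the search used earlier in this thread):

index=your_index genSecondsDifference=* earliest=-7d
| timechart span=1d avg(genSecondsDifference) as avg_genSecondsDifference

This produces one averaged value per 24-hour bucket across the last 7 days.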
Has anyone implemented the OCSF model in your Splunk security practice? I have a rough idea of it and am about to start adopting OCSF on our platform. So far I have only identified the fields per the OCSF model, and a few fields are still missing. How do I check for new fields if they are yet to be introduced in the OCSF model? Are there any pros and cons to implementing this? Any tips based on real-world implementation would be helpful. Thanks in advance
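For the field-mapping step, one practical pattern is a search-time rename/eval from your source fields to OCSF attribute names. A purely illustrative sketch (the index, sourcetype, and source field names are assumptions; verify the attribute names and the class/category IDs against the published OCSF schema at schema.ocsf.io before relying on them):

index=firewall sourcetype=your_firewall_sourcetype
| rename src_ip as src_endpoint_ip, dest_ip as dst_endpoint_ip
| eval class_uid=4001, category_uid=4
| table _time class_uid category_uid src_endpoint_ip dst_endpoint_ip

Diffing your mapped field list against the schema browser is also the practical way to spot fields that OCSF has not yet introduced.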
When monitoring Azure Integration Services with AppDynamics, start by instrumenting your Azure components with the AppDynamics SDK. Configure service endpoints to monitor response times and error rates. Consider custom instrumentation for your on-premise .NET application for end-to-end visibility.
Hi @shakti , open the Monitoring Console app [Settings > Monitoring Console > Resource Usage > Resource Usage: Instance ] and you'll have all the information and searches you requested. Ciao. Giuseppe
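If you'd rather have a raw search, a minimal sketch against the same introspection data the Monitoring Console reads (assuming default introspection logging is enabled on each instance):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type="splunkd server"
| eval cpu_pct='data.pct_cpu'
| timechart avg(cpu_pct) by host

Each indexer, search head, and deployment server reports its own resource usage into _introspection, so splitting by host gives per-instance CPU.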
Hello, Can anyone please provide me a search query to display the CPU usage of Splunk instances like the indexer, search head, deployment server, etc.?
Hello, I'm facing an issue with my AppDynamics Controller where I'm not able to retrieve certain data metrics due to an internal server error. When I try to access performance metrics for a specific application or tier, I encounter the following error message:

Internal Server Error: Unable to Retrieve Data Metrics

This error is impairing our ability to effectively monitor and troubleshoot performance issues within our application. After reviewing the Controller logs and configuration settings, I haven't been able to pinpoint the exact cause.

For context, we're running AppDynamics Controller version 4.10.3 on a Linux-based server environment. We have multiple applications and tiers instrumented, and the error appears consistently across all of them.

Could someone with experience with the AppDynamics Controller help resolve this issue? Any suggestions you can provide would be greatly appreciated. Thank you for your time and support!

Thank you, stevediaz
Hi @tjlavarias24, Did you find the solution? I have the same requirement. Can you help me with your solution? Thanks in advance.
Hi @CSReviews, as @marnall said, by running a search you probably modify the raw data. To export data, the best approach is to run a search without the table command (only the main search) and then export the data in raw format. There's only one issue: you have to run this search separating events by index, sourcetype, and host, and then import the data assigning the correct values; otherwise you cannot assign the correct values to these fields. Ciao. Giuseppe
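A minimal sketch of such an export search (the index, sourcetype, and host values are placeholders; run one search per combination, then use the Export button in the search UI and choose the Raw Events format):

index=your_index sourcetype=your_sourcetype host=your_host
| fields _raw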
Hi @mshakeb, if you don't have an Indexer Cluster, you have to:

identify all the indexes.conf files that contain the index definitions,
stop Splunk,
manually modify the conf file $SPLUNK_HOME/etc/splunk-launch.conf, replacing the $SPLUNK_DB value with the new value,
check whether any of the above indexes.conf files contain locations that don't use $SPLUNK_DB; if there are, change those locations to the new one,
manually move the folders from the old location to the new one,
restart Splunk.

For more info, you can see https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Moveanindex Ciao. Giuseppe
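For the splunk-launch.conf step, a minimal sketch (the new path is a placeholder; point it at your new storage location before restarting):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/new/storage/splunkdb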
Hi @user1, I have the same requirement. Did you find the solution for it? Can you please share the details? Thanks in advance.
I tried setting up a cluster map; I have 93 country codes to display. However, the map shows only 10 colors, and after that it shows only a repeated violet color. Further, I have used | geostats latfield=Lat longfield=Long count by Country, and it shows only 10 countries; all other countries are grouped as "Others".
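By default, geostats caps the number of split-by categories at 10 and folds the rest into OTHER. The globallimit option controls this; setting it to 0 keeps all categories. A minimal sketch based on the search above:

| geostats latfield=Lat longfield=Long globallimit=0 count by Country

Note that the map visualization still has a limited palette of distinct marker colors, so with 93 countries the colors will repeat even once all categories are kept.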
I am able to push CloudWatch metrics by selecting streaming and selecting JSON as the output data type. Additionally, I used the built-in Lambda transformation for CloudWatch metrics and selected the sourcetype aws:firehose:json. The data appears in Splunk as JSON, but its fields are not searchable. How do I get the data into a searchable format?
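If the events are valid JSON in _raw, a quick way to pull the fields out at search time is spath, which extracts all JSON paths when called with no arguments (the index name here is a placeholder):

index=your_index sourcetype="aws:firehose:json"
| spath

For automatic extraction at search time, you can instead set KV_MODE = json for that sourcetype in props.conf on the search head.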
Mmm, that's odd, because I use that technique to manipulate _time. If you could find a simple example of _raw data where that is the case - perhaps by limiting the search just to pick up an event of each type - I'd be really interested to see it. If the date format for the 2023 data does not match the strptime format string, that would cause a problem - that would be my suspicion. If you can do a simple search for that 2023 data and do this:

| eval orig_time=strftime(_time, "%F %T.%Q")
| eval _time=strptime(...)
| table _time orig_time

that may show the difference.
Got it remediated by including gcusello's suggestion of | eval latestDeployed_version=Deployed_Data_time."|".version and using that field in the stats statement as a max value instead of latest. This worked well and validated fine. Thanks a lot to both.
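For anyone landing here later, a minimal sketch of the full pattern (the app grouping field is an assumption from this thread's context; note that max() compares strings lexicographically, so this only picks the true latest when Deployed_Data_time is in a big-endian, zero-padded format such as %Y-%m-%d %H:%M:%S):

| eval latestDeployed_version=Deployed_Data_time."|".version
| stats max(latestDeployed_version) as latest by app
| eval Deployed_Data_time=mvindex(split(latest,"|"),0), version=mvindex(split(latest,"|"),1)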
Hi, Thanks, it works for the sample data I gave, but the actual data I pushed into Splunk is not in Deployed-date order: I pushed the old data (year 2023) later and the new data (2024) first. Hence, for some columns, the results follow the time the data was pushed into Splunk. Is there a workaround that can be applied?
Hi @theprophet01, To get a summary of entities with their info tags you can run the excellent query by sandrosov_splun:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval info_fields=spath(value,"informational.fields{}"), alias_fields=spath(value,"identifier.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
| appendpipe [| mvexpand alias_fields | eval field_value = spath(value,alias_fields."{}"), field_type="alias" | rename alias_fields as field_name ]
| appendpipe [| where isnull(field_type) | mvexpand info_fields | eval field_value = spath(value,info_fields."{}"), field_type="info" | rename info_fields as field_name ]
| where isnotnull(field_type)
| table entity_id entity_name entity_title field_name field_value field_type

This gives you one row per entity and field, with the entity ID, name, and title alongside each alias or info field and its value.

To list the services, you can call the getservice custom command that comes with ITSI:

| getservice
| table title, serviceid, description, service_tags, kpis, service_depends_on, services_depending_on_me, enabled, base_service_template_id, entity_rules, *

That returns one row per service with its ID, description, tags, KPIs, and dependency information.

Cheers, Daniel