All Posts


Has anyone implemented the OCSF model in their Splunk security practice? I have a rough idea of it and am about to start adopting OCSF on our platform. So far I have only identified the fields per the OCSF model, but a few fields are still missing. How do I check for new fields if they have yet to be introduced in the OCSF model? What are the pros and cons of implementing this? Any tips based on real-world implementation would be helpful. Thanks in advance.
When monitoring Azure Integration Services with AppDynamics, start by instrumenting your Azure components with the AppDynamics SDK. Configure service endpoints to monitor response times and error rates. Consider custom instrumentation for your on-premises .NET application for end-to-end visibility.
Hi @shakti, open the Monitoring Console app [Settings > Monitoring Console > Resource Usage > Resource Usage: Instance] and you'll have all the information and searches you requested; a starting-point search is sketched below. Ciao. Giuseppe
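A sketch against the _introspection index (the field names are as I recall them from resource_usage.log, so verify them in your environment):

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart avg(cpu_pct) BY host

Run it from a search head that receives _introspection data from all instances, so each indexer, search head, and deployment server appears as its own host series.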
Hello, can anyone please provide me a search query to display the CPU usage of Splunk instances such as the indexer, search head, deployment server, etc.?
Hello, I'm facing an issue with my AppDynamics Controller where I'm not able to retrieve certain data metrics due to an internal server error. When I try to access performance metrics for a specific application or tier, I encounter the following error message: Internal Server Error: Unable to Retrieve Data Metrics

This error is impairing our ability to effectively monitor and troubleshoot performance issues within our application. After reviewing the Controller logs and configuration settings, I haven't been able to pinpoint the exact cause of this issue.

For some context, we're running AppDynamics Controller version 4.10.3 on a Linux-based server environment. We have multiple applications and tiers instrumented, and this error appears consistently across all of them.

Could someone with experience with the AppDynamics Controller help resolve this issue? Any suggestions would be greatly appreciated. Thank you for your time and support!

stevediaz
Hi @tjlavarias24, did you find the solution? I have the same requirement; can you help me with your solution? Thanks in advance.
Hi @CSReviews, as @marnall said, by running a search you may modify the raw data. To export data, the best approach is to run a search without the table command, only the main search, and then export the data in raw format. There's only one issue: you have to run this search separately for each combination of index, sourcetype, and host, and then import the data assigning the correct values; otherwise you cannot assign the correct values to these fields. An example is sketched below. Ciao. Giuseppe
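For example, one export search per combination, with hypothetical index, sourcetype, and host names:

index=myindex sourcetype=mysourcetype host=myhost
| fields _raw

Export the results in raw format, then re-ingest each file with the matching index, sourcetype, and host set on the input.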
Hi @mshakeb, if you don't have an Indexer Cluster, you have to:

identify all the indexes.conf files that contain the indexes information,
stop Splunk,
manually modify the conf file $SPLUNK_HOME/etc/splunk-launch.conf, replacing the $SPLUNK_DB value with the new value (a sketch of this edit follows below),
check whether any of the above indexes.conf files contain locations that don't use $SPLUNK_DB; if so, change those locations to the new one,
manually move the folders from the old location to the new one,
restart Splunk.

For more info, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Moveanindex Ciao. Giuseppe
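The splunk-launch.conf edit is a one-liner; a sketch (the new path is just an example):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/opt/new_splunk_db

After moving the folders, make sure the Splunk user still owns and can write to the new location before restarting.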
Hi @user1, I have the same requirement. Did you find the solution for it? Can you please share the details? Thanks in advance.
I tried setting up cluster map, I have 93 country codes to display. However, the map shows only 10 colors and after that it shows only repeated Violet color. Further, I have used "| geostats latfield... See more...
I tried setting up a cluster map; I have 93 country codes to display. However, the map shows only 10 colors and after that it shows only a repeated violet color. Further, I have used | geostats latfield=Lat longfield=Long count by Country and it shows only 10 countries; the other countries are grouped as "Others".
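The 10-category cap is the default globallimit of the geostats command; setting globallimit=0 keeps every country as its own category instead of folding the rest into "Others". A sketch, assuming the same field names:

| geostats latfield=Lat longfield=Long globallimit=0 count by Country

The marker colors come from a separate, limited visualization palette, so with 93 countries some colors will likely still repeat.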
I am able to push CloudWatch metrics by selecting streaming and selecting JSON as the output data type. Additionally, I used the built-in Lambda transformation for CloudWatch metrics, and I selected the source type aws:firehose:json. The data is seen in Splunk as JSON data which is not searchable. How do I get the data into a searchable format?
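One thing to check is whether search-time JSON extraction is enabled for the sourcetype. A props.conf sketch for the search head, assuming each event lands as a single JSON object:

[aws:firehose:json]
KV_MODE = json

If Firehose instead delivers several metric records concatenated into one event, you would also need index-time line breaking, so inspect the raw view of a few events first.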
Mmm, that's odd, because I use that technique to manipulate _time. If you could find a simple example of _raw data where that is the case - perhaps by limiting the search just to pick up an event of each type - I'd be really interested to see it. If the date format for the 2023 data is not as per the strptime format syntax, that would cause a problem, as _time would end up later than it should - that would be my suspicion. If you can do a simple search for that 2023 data and do this:

| eval orig_time=strftime(_time, "%F %T.%Q")
| eval _time=strptime(...)
| table _time orig_time

that may show the difference.
Got it remediated by including gcusello's suggestion of | eval latestDeployed_version=Deployed_Data_time."|".version and using that field in the stats statement with max() instead of latest(). This worked well and was validated to be fine. Thanks a lot to both.
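A sketch of that combined-field approach, with one adjustment: concatenating the parsed epoch instead of the raw date string, so that the string max() always matches time order (a lexicographic max on m/d/Y strings can misorder across months):

| eval t=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| eval latestDeployed_version=t."|".version
| stats max(latestDeployed_version) AS latest_combo BY app env
| eval version=mvindex(split(latest_combo, "|"), 1)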
Hi, thanks, it works for the sample data which I have given, but the actual data I pushed into Splunk is not ordered by the Deployed date timestamp: I pushed the old data (year 2023) later and the new data (2024) first. Hence, for some columns the results come out ordered by the time the data was pushed into Splunk. Can any workaround be applied?
Hi @theprophet01,

To get a summary of entities with their info tags you can run the excellent query by sandrosov_splun:

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text
| eval value=spath(value,"{}")
| mvexpand value
| eval info_fields=spath(value,"informational.fields{}"), alias_fields=spath(value,"identifier.fields{}"), entity_id=spath(value, "_key"), entity_title=spath(value, "title"), entity_name=spath(value, "identifying_name")
| appendpipe [| mvexpand alias_fields | eval field_value = spath(value,alias_fields."{}"), field_type="alias" | rename alias_fields as field_name ]
| appendpipe [| where isnull(field_type) | mvexpand info_fields | eval field_value = spath(value,info_fields."{}"), field_type="info" | rename info_fields as field_name ]
| where isnotnull(field_type)
| table entity_id entity_name entity_title field_name field_value field_type

This gives you one row per entity and field, with the field name, value, and type (alias or info).

To list the services, you can call the "getservice" custom command that comes with ITSI:

| getservice
| table title, serviceid, description, service_tags, kpis, service_depends_on, services_depending_on_me, enabled, base_service_template_id, entity_rules, *

That returns one row per service with its ID, description, tags, KPIs, and dependencies.

Cheers, Daniel
@NathanAsh You're right! Then use the strptime() example I mentioned and the latest() function. You don't seem to need _time, so just convert Deployed_Data_time to _time and you can use latest(version):

| makeresults format=csv data="Deployed_Data_time,env,app,version
4/16/2024 15:29,axe1,app1,v-228
4/16/2024 15:29,axe1,app1,v-228
9/15/2023 8:12,axe1,app1,v-131
9/15/2023 8:05,axe2,app1,v-120
9/12/2023 1:19,axe2,app1, v-128
4/16/2024 15:29,axe2,app2,v-628
4/16/2024 15:26,axe2,app2,v-626
9/15/2023 8:12,axe2,app2,v-531
9/15/2023 8:05,axe1,app2,v-530
9/12/2023 1:19,axe1,app2, v-528"
| eval _time=strptime(Deployed_Data_time, "%m/%d/%Y %H:%M")
| stats latest(version) AS version BY app env
| table app,version,env
| chart values(version) by app, env limit=0
| fillnull value="Not Deployed"
You can go to Searches, Reports, and Alerts, then set the App to Splunk Security Essentials. If you set the Owner to All, you can then see all of the searches included in the app. If one of them is scheduled, you can set its time range and schedule so that it will onboard data from long ago in a single sweep. Did you do anything in the app interface to activate the "onboarding background search"?
There is a troubleshooting guide here: https://splunk.github.io/splunk-connect-for-syslog/main/troubleshooting/troubleshoot_resources/ The guide describes how to write the rawmsg to a file for both the working server and your non-working Windows machine, to see if the messages are received the same. Once you confirm that the logs are being received the same, you can move on to finding out why Splunk is not indexing them.
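If I remember the guide correctly, storing the raw message is toggled through the SC4S env_file, roughly as below - verify the variable name against the guide, as this is from memory:

# /opt/sc4s/env_file
SC4S_SOURCE_STORE_RAWMSG=yes

Restart the SC4S service after the change so the setting takes effect.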
Are you sure it is supposed to go to the raw event collector at /services/collector/raw? Unless I am mistaken, you need the export_raw option to be enabled to export raw data for that endpoint. Try running it with the endpoint set to: https://<host>:8088/services/collector
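For a quick manual test of the endpoint, a curl sketch (host and token are placeholders):

curl -k "https://<host>:8088/services/collector" -H "Authorization: Splunk <your-hec-token>" -d '{"event": "test event", "sourcetype": "manual"}'

/services/collector expects the JSON envelope shown above, while /services/collector/raw takes the request body as-is and applies line breaking on the Splunk side.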
We want to migrate cluster indexer data from the default location (/opt/splunk/var/lib/splunk) to customized warm/hot and cold locations, for example /opt/warm_hot and /opt/cold. How can we achieve this goal? Thank you
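For reference, the target layout is usually expressed as volumes in indexes.conf, deployed from the cluster manager so every peer gets the same paths. A sketch with a hypothetical index name (the existing buckets still have to be moved, or left to roll and age out):

[volume:warm_hot]
path = /opt/warm_hot

[volume:cold]
path = /opt/cold

[my_index]
homePath = volume:warm_hot/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb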