All Posts

Hi @JandrevdM, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @sbhatnagar88, I don't see any issue in this activity: plan it and migrate one server at a time. Ciao. Giuseppe
Hi Folks, currently we have 4 physical indexers running on CentOS, but since CentOS is EOL we plan to migrate the OS from CentOS to Red Hat on the same physical nodes. The cluster master is a VM and is already running on Red Hat, so we will not be touching the CM. What should be the approach here and how should we plan this activity? Any high-level steps would be highly appreciated.
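A rough sketch of the per-node sequence often used for this kind of in-place OS rebuild (the backup/restore details are assumptions about your setup; verify each step against the Splunk docs for your version):

# On the cluster manager: enable maintenance mode so bucket-fixup
# activity is suppressed while a peer is down
splunk enable maintenance-mode

# On the indexer being migrated: take the peer offline gracefully
splunk offline

# Back up $SPLUNK_HOME/etc, confirm the index volumes live on partitions
# that survive the reinstall, then rebuild the OS on Red Hat

# After the rebuild: install the same Splunk version, restore
# $SPLUNK_HOME/etc, remount the index volumes, and start the peer
splunk start

# On the cluster manager: leave maintenance mode once the peer rejoins
splunk disable maintenance-mode

Repeat for one indexer at a time, letting the cluster return to a valid and complete state between nodes.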
Hot buckets are still being written to; warm buckets are not. Both are usually on fast (expensive) storage.
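As an illustration, hot and warm buckets sit side by side in the same db directory of an index; only the naming differs (the timestamps and IDs below are made up):

$SPLUNK_DB/<index>/db/
    hot_v1_42/                      # hot: open, still being written to
    db_1700050000_1700000000_41/    # warm: rolled from hot, read-only
    db_1700000000_1699950000_40/    # warm: db_<newest>_<oldest>_<id>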
Thanks! I initially got it right and then tried to think too deeply into it. I forgot that if you dedup, Splunk will take the latest event.
Answered my own question - the script below worked for me:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function(mvc) {
    console.log('JavaScript loaded'); // Debug log to confirm script is running
    var defaultTokenModel = mvc.Components.getInstance('default');
    var submittedTokens = mvc.Components.getInstance('submitted');
    document.getElementById('submit_button').addEventListener('click', function() {
        var comment = document.getElementById('html_ta_user_comment').value;
        console.log('Submit button clicked'); // Debug log to confirm button click
        console.log('Comment:', comment); // Debug log to show the comment value
        defaultTokenModel.set('tokComment', comment);
        console.log('Token set:', defaultTokenModel.get('tokComment')); // Debug log to confirm token is set
        // Trigger the search by updating the submitted tokens
        submittedTokens.set(defaultTokenModel.toJSON());
    });
});

By adding the line submittedTokens.set(defaultTokenModel.toJSON()); we ensured that the search is refreshed whenever the token value changes.
Thanks @isoutamo. This is very insightful.
Try something like this

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| dedup object
| table change_type object resource_group subscription command _time
| sort object asc
I see. So it's really just about data separation. I'm wondering though, since you said this: "Warm buckets store recent data on fast, expensive storage to ensure quick access for critical searches, optimizing performance for frequently accessed information." The same thing can be said for hot buckets too, right? I mean, after all, hot and warm buckets share the same directory. I don't know. Maybe I just haven't quite fully grasped yet why warm buckets exist for reasons beyond data separation.
Hi @JandrevdM, could you better describe your requirement? Using your search, you have the last events for your grouping fields. You could add a condition that the last event was before the observation period (e.g. more than one day ago), so you'll have the devices that didn't send logs in the last day. Is this your requirement?

If this is your requirement, you could use something like this:

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| stats latest(_time) AS _time BY command object subscription change_type resource_group
| where _time<now()-86400
| table change_type object resource_group subscription command _time
| sort object asc

Ciao. Giuseppe
Thank you so much, now it is clear.
Off topic but you could look at the documentation https://docs.splunk.com/Documentation/Splunk/9.0.4/DashStudio/inputConfig#Add_a_submit_button  
Good day, I am trying to find the latest event for my virtual machines to determine if they are still active or decommissioned. The object is the hostname, and the command is where I can see if a device was deleted or just started. Afterwards I will add command!="*DELETE".

index=db_azure_activity sourcetype=azure:monitor:activity change_type="virtual machine"
| rename "identity.authorization.evidence.roleAssignmentScope" as subscription
| stats max(_time) as time by command object subscription change_type resource_group
| convert ctime(time)
```| dedup object```
| table change_type object resource_group subscription command time
| sort object asc
Depending on your knowledge / background of computer systems and processing, this may or may not ring bells with you. Unix-based systems (and to some extent Windows, although Unix was there first) use a pipe construct to pass the output of one command to the input of the next. SPL does the same thing: the pipe symbol (|) delineates one command from the next, and the events that have been produced so far are passed from the output of one command to the next.

In your example, there is nothing in the events pipeline before the first pipe. The makeresults command is a generating command which generates events into the pipeline (without a count argument, there is just 1 event, as in this case). This event (which just has the _time field) is passed to the eval command, which simply adds three additional fields and passes the event on to the next command (through the event pipeline).

The first appendpipe command receives the single event and outputs all the events it receives, plus any events generated by processing them. In this case, it adds another field (total1) to all the events (just one) that it processes and outputs those to the events pipeline. There are now 2 events in the pipeline.

The second appendpipe command receives the two events and outputs all the events it receives (the two it received from the previous appendpipe command), and then processes the events in the pipeline. The first event that it processes doesn't have a value in total1 (as it is the original event from the makeresults), so total1+1 is null+1, which is null; test2 is added to the event and it is output to the events pipeline. The second event is then processed, and it does have a value in total1, so total1 is updated in this event: 5+1=6. test2 is added to this event and it is output to the pipeline.

The pipeline now has 4 events in it, which are what is displayed in the table. Hopefully that is clearer now?
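A minimal sketch of the kind of search being discussed (the field names and values are assumptions, since the original search isn't quoted in this thread):

| makeresults
| eval field1="a", field2="b", field3="c"
| appendpipe [ eval total1=5 ]
| appendpipe [ eval total1=total1+1, test2="x" ]
| table _time field1 field2 field3 total1 test2

Running this yields four events: the original, the copy appended with total1=5, and then two more appended by the second appendpipe, one where total1 is null (because null+1 is null) and one where total1=6.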
Thank you for your answer, but I still didn't understand why, in the third event, the field total1 is null+1.
Great post! How do I add a submit button so that the text entered into the form gets printed after the Submit button is pressed? Thanks.
Splunk will do aggregations on the fields you tell it to as long as you have those fields extracted. Until then, they are not fields, they are just some parts of the raw data. You must define proper ways to extract the fields you want to either aggregate or split your aggregations on. One way is what @yuanliu has already shown. Another way is to define extractions at sourcetype level. Anyway, your data seems a bit "ugly" - it seems to be a json structure with a string field containing some partly-structured data. It would be much better if the data was actually provided in a consistent format so that you don't have to stand on your head in a bucket full of piranhas to get the values you need.
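For illustration only, an inline extraction might look something like this (the index, field name, and pattern are assumptions, since the actual events aren't shown here):

index=example_index sourcetype=example:json
| rex field=message "user=(?<user>\S+)\s+status=(?<status>\d+)"
| stats count BY user status

rex pulls user and status out of the semi-structured string so that stats can aggregate on them as real fields; the same pattern could instead live in an EXTRACT setting at sourcetype level.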
Hi @Alan_Chan, did you analyze the possibility of using DB Connect? Otherwise, you should ask this question to a MySQL support or community forum. Ciao. Giuseppe
1) I'm honestly not sure what Splunk will do if there is a whitespace-filled value. Try and see. You're not changing your indexed data after all - you can search it every way you want (if you have too much data to search, just narrow your time range for quick tests).

2) You could try using some streamstats tricks as long as you can make some assumptions about your data (like it being sorted the right way), but that would be bending over backwards. True, eventstats has some limitations when it comes to data set size. You can, however, have as many eventstats commands within a single search as you want. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eventstats#Usage and https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eventstats#Functions_and_memory_usage
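A minimal sketch of chaining several eventstats commands (the index and field names are assumptions):

index=example_index
| eventstats count AS events_per_host BY host
| eventstats avg(events_per_host) AS avg_events
| where events_per_host > avg_events

Each eventstats adds its aggregate as a new field on every event, so later commands can compare each event against the overall statistics.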
Hi @Alan_Chan, I don't know Keycloak, but ingestion is surely done using scripts that call APIs, which aren't acceptable on Splunk Cloud. You have only one solution: install an on-premise Heavy Forwarder connected to Splunk Cloud, and install and configure the Add-On on it. Ciao. Giuseppe