All Posts

Try $result.my_sum$
I tested further and it's not the modulus calculation, it's how Splunk handles large numbers. This search shows that once an integer has 17 digits or more, odd numbers turn even. The field odd_highest_correct_len_16 in the makeresults search is the highest correct odd value I reached by working digit by digit from left to right. If the last digit of odd_highest_correct_len_16 is set to 3, Splunk makes a 2 out of it. (This is the IEEE 754 double-precision limit: 9007199254740991 is 2^53 - 1, and above 2^53 a double can no longer represent every integer, so odd values get rounded to the nearest representable even number.)   | makeresults | fields - _time | eval odd_correct_len_16=1000000000000001, odd_highest_correct_len_16=9007199254740991, odd_incorrect_len_17=10000000000000001    I'm going to file a support case for that.
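As a workaround sketch until this is addressed, the parity of a large ID can be checked on its string form, without converting the whole value to a number (the field names here are made up for illustration):

```
| makeresults
| eval big_str="10000000000000001"
| eval last_digit=tonumber(substr(big_str, len(big_str), 1))
| eval parity=if(last_digit % 2 == 1, "odd", "even")
```

Any numeric operation on the full 17-digit value would round it first, so the trick is to only ever convert the last digit.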
Hi @Raja_Selvaraj , if you have more than 80 columns, how do you expect to read 80 columns of values plus 80 columns of differences from the previous values? It's unreadable anyway! Maybe you should consider a different visualization. Anyway, you could use something like this (autoregress copies the previous row's value, which also works for the host field, unlike delta): <your_search> | bin span=1d _time | stats count BY host _time | autoregress count AS previous_count | autoregress host AS previous_host | where host=previous_host | eval deltaA=previous_count-count Ciao. Giuseppe
Hi @bapun18 , yes you can, but you should create a documented procedure containing all the steps to perform when adding the new node, in particular: change the hostname and IP address on the server, change the hostname in server.conf and in inputs.conf, change the encrypted passwords, and update the references to the cluster members (for SHs). It might be easier to keep an offline copy of the servers to start from if there's a corruption, even if I can't imagine which kind of corruption you are speaking of. Ciao. Giuseppe
Hi @LizAndy123 , it's always best practice to name the results of aggregation operations, so use  sum(SizeGB) AS my_sum and then use my_sum in the token. Ciao. Giuseppe
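Putting this together with the original search (a sketch; SizeGB comes from the question, ProjectID and the rest of the pipeline are assumed):

```
<your_search>
| stats sum(SizeGB) AS my_sum BY ProjectID
```

Then reference $result.my_sum$ in the alert message. The rename is the usual fix because $result.<name>$ resolves a plain field name, and names containing parentheses like sum(SizeGB) tend not to resolve.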
Hi @shabamichae , this is a lab environment, so it isn't mandatory to have different servers for the three roles, also because you only need a dedicated DS if you have to manage more than 50 clients, and in the lab you have few clients. In general, only the DS requires a dedicated server; the License Master and Monitoring Console can share the same machine. Anyway, in the lab, the exercise tells you which solution to use, and both are acceptable (in the lab!). Ciao. Giuseppe
Hi @Ben , by Googling you can find many best practices for searches, e.g.:
https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-search-best-practices.html
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Optimizing_search
https://community.splunk.com/t5/Splunk-Search/How-to-achieve-Optimized-Search-time-in-Splunk/m-p/29201
https://conf.splunk.com/files/2016/slides/best-practices-and-better-practices-for-users.pdf
https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/Quicktipsforoptimization
In general, the first rule is to limit the time range of your searches: avoid "All Time" and spans of months or days. If you don't know which indexes to use, run a first search to identify them, then run a search limited to only the interesting indexes. Then, use Fast Mode instead of Verbose Mode. If you filter only on the index-time fields (index, source, sourcetype and host), you can also use | tstats, which is faster than a normal search. Ciao. Giuseppe
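For the "which index is it in?" step, a tstats sketch that counts recent events per index and sourcetype (the 15-minute window is just an example; widen it as needed):

```
| tstats count WHERE index=* earliest=-15m BY index sourcetype
| sort - count
```

Because tstats reads only indexed metadata, this is cheap to run even across all indexes, and the result tells you which index/sourcetype pairs to target with a normal search.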
Hello, As a SOC analyst, what are the best practices for writing SPL queries to quickly find specific data (such as an IP address, a string, or a keyword) across all logs and indexes? I understand that it's generally recommended to narrow down searches and avoid using `index=*`, but sometimes I don't know exactly where the data is indexed (i.e., which index, sourcetype, or field name). Any advice would be greatly appreciated. Thanks in advance!
@shabamichae I think this is a small deployment; installing the LM and the MC (Monitoring Console) on the same instance will work. This approach is practical for lab environments and smaller deployments, as it reduces resource overhead.
@att35 This is expected behavior with the default configuration. To change it, modify the configuration as described in the document below: https://docs.splunk.com/Documentation/Splunk/9.3.1/DistSearch/PropagateSHCconfigurationchanges#Choose_a_deployer_push_mode
@att35  Yes, you need to change deployer_push_mode to one of the other values, based on your requirements. There are four modes of deployer_push_mode: - full - merge_to_default - local_only - default_only The default is merge_to_default, which is why you are observing the behavior you mentioned. - If set to "full": bundles all of the app's contents located in default/, local/, users/<app>/, and other app subdirectories, then pushes the bundle to the members. When applying the bundle on a member, the non-local and non-user configurations from the deployer's app folder are copied to the member's app folder, overwriting existing contents. Local and user configurations are merged with the corresponding folders on the member, such that member configuration takes precedence. This option should not be used for built-in apps, as overwriting a member's built-in apps can result in adverse behavior. - If set to "merge_to_default": merges the local and default folders into the default folder and pushes the merged app to the members. When applying the bundle on a member, the default configuration on the member is overwritten. User configurations are copied and merged with the user folder on the member, such that the existing configuration on the member takes precedence. - If set to "local_only": bundles the app's local directory (and its metadata) and pushes it to the cluster. When applying the bundle to a member, the local configuration from the deployer is merged with the local configuration on the member, such that the member's existing configuration takes precedence. Use this option to push the local configuration of built-in apps, such as search. If used to push an app that relies on non-local content (such as default/ or bin/), that content must already exist on the member. - If set to "default_only": bundles the app's contents except for the local and users directories. When applying the bundle to a member, the default configuration on the member is overwritten, while the member's local and user configurations are left untouched. Based on your requirements, change deployer_push_mode accordingly. It is highly advisable to review the document below to gain a clear understanding of the behavior before implementing any changes. https://docs.splunk.com/Documentation/Splunk/9.3.1/DistSearch/PropagateSHCconfigurationchanges#Choose_a_deployer_push_mode
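For reference, the mode is set per app in app.conf under the [shclustering] stanza on the deployer (a sketch; my_app and the choice of local_only are placeholders for your own app and mode):

```
# $SPLUNK_HOME/etc/shcluster/apps/my_app/local/app.conf on the deployer
[shclustering]
deployer_push_mode = local_only
```

After changing it, push the bundle again with splunk apply shcluster-bundle for the setting to take effect.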
I am a Splunk Enterprise Certified Admin with an opportunity to advance to Splunk Architect, and I am preparing for the Splunk Architect practical lab. I want to ask: in the practical lab exam, is it acceptable to have a single instance run as the Deployment Server, License Master, and Monitoring Console on the same port on the management system, or am I expected to run three different Splunk instances on different ports (Deployment Server, License Master, and Monitoring Console) on the management system?
Yes, you can contact Professional Services Team to use Caesar tool to achieve the goal.
We have a 4-node SHC connected to a deployer. For a use case, I created a simple custom app that just puts a handful of dashboards together. For ease of use, I created it directly on the SHC, and all knowledge objects replicated among the members. During the next bundle push, will the deployer delete this app from the SHC, as it has no knowledge of it? Should I move this app under the shcluster/apps folder on the deployer as well, to be safe? Thanks, ~Abhi
Hi, I don't know if it is the same issue, but could you check these requirements? For example, does your CPU support the AVX/AVX2 instructions, and if yes, are they enabled? https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore https://www.mongodb.com/docs/manual/administration/production-notes/ I hope this helps.
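On Linux, one quick way to check is to grep the CPU flags from /proc/cpuinfo (a sketch; it only works on Linux, and empty output means neither flag is advertised):

```shell
# Print which of the avx / avx2 flags the CPU advertises
grep -o -w -e avx -e avx2 /proc/cpuinfo | sort -u
```

Note that -w keeps "avx" from matching inside "avx2", so each supported flag is listed exactly once.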
Hi, can you try this: index="sample_idx" $serialnumber$ log_level=info | regex message="(?:Unit\s+state\s+update\s+from\s+cook\s+client\s+target)" This tries to filter for events whose message contains that text, with the words separated by one or more whitespace characters. Is that what you are looking for? I'm sorry if I misunderstood.
Sorry for the delay @livehybrid. We just managed to get access to a customer's Splunk environment this week and it was very productive! Here are our findings:
- we validated that our Splunk app works with Splunk Enterprise Security on a standalone node
- we were able to recreate the bug in their clustering setup: we found out that not all of our Python scripts were executable, preventing execution in that context (Can't load script error)
- the source of the download errors was finally root-caused: Splunk Enterprise Security hijacks the Python module path order. So when we were trying to import our application's bin/utils.py in our own code, it was trying to import /opt/splunk/etc/apps/SA-ThreatIntelligence/bin/utils.py. When we overrode sys.path in our script in the customer environment, the application worked again. The simplest work-around is to prefix all of our script files with ipinfo_ to prevent module name collisions. We still feel that the Python module path hijacking should not be happening; not sure if this is a bug that Splunk Platform teams should fix. If I need to file a bug somewhere, let me know!
- we noticed that we don't have to assign a bearer token to the Splunk admin user for our REST API calls to work with the Python SDK. We can use another user (e.g. ipinfo_admin) with a restricted set of permissions. We are still trying to figure out the smallest set of permissions required for things to work.
Next step is applying all of the fixes above and seeing if it resolves our customers' problems. I'll reach out in this thread if new issues pop up.
So I had help before: after a search, I can send a report on a schedule and pass a token to a Mattermost channel. I can send the token and it works, but I am doing a search where one of the fields is a sum, e.g. stats sum(SizeGB), which computes the total data uploaded for a Project ID per day. The report works great, but I want to send that figure as a token in the alert. I can send the project ID but not the sum: I have tried $testresult.sum(SizeGB)$, and I also did an eval of the sum, called it total_size, and tried that as a token, but it is just blank.
Try something like this | eval Date=strftime(_time, "%Y-%m-%d") | search NOT ( [ | inputlookup holidays.csv | eval HolidayDate=strftime(strptime(HolidayDate,"%Y-%m-%d")+86400,"%Y-%m-%d") | fields HolidayDate | rename HolidayDate AS Date ]) The rename is needed so the subsearch returns Date="..." conditions that match the outer Date field; a subsearch filter only applies when the field names line up.