All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello there, I'm creating a visualization using Dashboard Studio and showing a field with a single value visualization, but the font size of the displayed data is dynamic, as in the capture. How can I make the data font size static at a certain size? I already added a fontSize value but there is no change: "options": { "fontSize": 24 }
Hi @Somesh, yes, it's the same. Only one attention point: configure search affinity on your Search Heads. This is relevant for more performant searches and to avoid a Search Head using the other site's Indexers, but mainly because otherwise, when the primary site is down, a Search Head continues to search only the Site1 Indexers and so misses part of the data. I encountered this issue during an acceptance test! Ciao. Giuseppe
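For reference, search affinity is driven by the site attribute on each Search Head; a sketch of the relevant server.conf setting (values illustrative, adapt to your sites):

```ini
# server.conf on a Search Head -- illustrative values
[general]
# site1 / site2: the SH prefers indexers in its own site (search affinity)
# site0: disables affinity, so the SH searches peers across all sites
site = site0
```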
ResourceInfo: { ID: "58", User: "abc", NVM: { a: "522523632", b: "80000000", c: "442523632", d: "14", . . }, RAM: { [+] }, ROM: { [+] } }

For RAM, ROM and NVM I want to get the specific data inside them, as a table like this:

component   Value
a           522523632
b           80000000
c           442523632
d           14
. . .

I want to form a table like this for each of RAM, ROM and NVM, and I do it like this. But sometimes I get an error message like "field tmp does not exist", even though there is data. So I want to avoid mvzip and get this data some other way. Is there another way to deal with JSON data?

| spath output=RAM ResourceInfo.RAM
| rex field=RAM max_match=0 "\"(?<component>[^\"]+)\":(?<Value>[\d\.]+)"
| eval tmp = mvzip(component,Value)
| mvexpand tmp
| eval component=mvindex(split(tmp,","),0)
| eval Value=mvindex(split(tmp,","),1)
| table component Value
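Outside SPL, the underlying transformation is just flattening one nested JSON object into component/value rows. A minimal Python sketch of the same idea (the event string below is a reconstructed stand-in for the sample above; RAM/ROM values are invented):

```python
import json

# Reconstructed stand-in for the ResourceInfo event shown above.
event = ('{"ResourceInfo": {"ID": "58", "User": "abc", '
         '"NVM": {"a": "522523632", "b": "80000000", "c": "442523632", "d": "14"}, '
         '"RAM": {"a": "1024"}, "ROM": {"b": "2048"}}}')

def component_table(section):
    """Return (component, Value) rows for one nested section (RAM/ROM/NVM)."""
    info = json.loads(event)["ResourceInfo"]
    return sorted(info[section].items())

# component_table("NVM") pairs each key with its value -- no mvzip needed
```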
With Splunk rex you need to double up on backslashes when matching backslashes in the string - try something like this:

| rex field=raw_msg max_match=0 "(?<=\(|]\\\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

Having said that, you might want to consider extracting each group of fields as a whole and using mvexpand before separating into post, status, etc., as the multivalue fields you currently have do not align: null values are not inserted into the mv fields.
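The double escaping at work here can be reproduced in any language whose string literals also consume one level of backslashes; a quick Python illustration (the sample string is a shortened stand-in for the event):

```python
import re

# Shortened stand-in for the event: one literal backslash before the ';'.
s = "]\\;second_post:status"

# In regex syntax, two backslashes match one literal backslash.
# With a raw string, what you type is what the regex engine sees:
m1 = re.search(r"]\\;(?P<post>[^:]+)", s)

# In a plain quoted string the parser eats one level first, so you must
# type four backslashes to hand the regex engine two -- the same thing
# that happens inside Splunk's double-quoted rex string:
m2 = re.search("]\\\\;(?P<post>[^:]+)", s)
```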
Found a fix after going through the Microsoft Docs: https://learn.microsoft.com/en-gb/azure/storage/common/storage-private-endpoints?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json

The answer is hidden in the "DNS changes for private endpoints" section: https://learn.microsoft.com/en-gb/azure/storage/common/storage-private-endpoints?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#dns-changes-for-private-endpoints

Once you create a private endpoint for the storage account, you should be able to resolve the private IP with a DNS name. For example:
Storage Account Name - StorageAccountA
Private IP - 10.1.1.2
Storage Account DNS Name - StorageAccountA.blob.core.windows.net

Now the trick is that wherever you are configuring the add-on, say from your on-prem server:
1. You should be able to resolve the DNS name to the private IP from that server.
2. You should have connectivity to the private IP on port 443.
3. You need access keys to the storage account.
That's it - you can configure it on the add-on and the connections will go through the private endpoint.
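Steps 1 and 2 above boil down to a name lookup and a TCP connect; a minimal sketch using only the Python standard library (the account name and IP are the example values from above, and the helper name is invented):

```python
import socket  # used by the connectivity checks sketched in the comments

def blob_endpoint(account_name: str) -> str:
    """DNS name that should resolve to the private IP (e.g. 10.1.1.2)
    once the private endpoint's DNS changes are in place."""
    return f"{account_name}.blob.core.windows.net"

host = blob_endpoint("StorageAccountA")

# From the on-prem server, the two checks amount to:
#   socket.getaddrinfo(host, 443)             # step 1: name resolves
#   socket.create_connection((host, 443), 5)  # step 2: TCP reachable on 443
# (left commented out here: they need the real network path in place)
```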
Okay, my question was: if we want to set up a deployer for a multisite cluster, should we follow the same procedure as we did for the single-site cluster?
We have a cluster with two search heads and two indexers. We need to install the Enterprise Security app on the search heads. The question arises regarding the summary index and indexes created during the Enterprise Security installation, like IOC and notable. Should these indexes be created with the same names on our indexers?
What do you mean by "seems like mvzip command is depricated"? Are you getting an error message? How are you trying to use it? If you don't want to, or can't, use the mvzip command, a replacement would depend on what it is you are trying to do. Please can you expand on your use case, with sample events, a description (in non-SPL terms) of what you are trying to achieve, and a representation of your desired output.
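For what it's worth, as far as I know mvzip is not deprecated in SPL; it just pairs two multivalue fields positionally. A Python analogy of the usual mvzip → mvexpand → split pattern (sample values invented for illustration):

```python
# Two multivalue fields, as lists (values invented for illustration).
component = ["a", "b", "c"]
value = ["522523632", "80000000", "442523632"]

# mvzip(component, value): one "key,value" string per position
tmp = [",".join(pair) for pair in zip(component, value)]

# mvexpand tmp followed by split(tmp, ",") recovers the two columns per row
rows = [t.split(",") for t in tmp]
```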
Hi @PickleRick, I was going from 9.0.1 to 9.3.0. Cheers, Dabbsy
Sorry! My bad!! Manager and SHC-Deployer: one machine each. So you suggest the SHC-Deployer is not required for a multisite cluster?
Hi @Somesh, I don't like that the Cluster Manager and the SHC-Deployer are on the same server (I'd prefer a dedicated Cluster Manager), but what's the issue? Both the Indexers and the Search Head Cluster continue to work without the Cluster Manager and the Deployer, so your infrastructure keeps working even if Site1 is unavailable. The real question should be: can my infrastructure manage the log volume and the searches? If yes, you don't have issues. Ciao. Giuseppe
Of course you're using inputs.conf; without it you'd have no inputs. The question is which inputs you get your data from. Is it a simple tcp:// or udp:// input, with data received directly on your indexer (which you shouldn't do)? Is it an intermediate syslog daemon writing to files which are read by a UF? Is it something else?
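For reference, the scenarios above correspond to different inputs.conf stanzas; hypothetical examples (the port, path and index name are placeholders):

```ini
# Direct network input on the indexer (generally discouraged):
[udp://514]
sourcetype = syslog

# UF monitoring files written by an intermediate syslog daemon:
[monitor:///var/log/syslog-ng/*.log]
sourcetype = syslog
index = network
```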
Previously I had set up a single-site cluster with the requirements below:
Indexer Cluster: 3 machines
Search Heads: 3 machines
Manager, Monitoring Console & SH Deployer: 1 machine

Now I need to set up a multisite cluster with the requirements below:
Site1 - Indexers: 3, Search Heads: 2, Manager: 1
Site2 - Indexers: 3, Search Heads: 2
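For context, the multisite pieces are mostly server.conf settings; a sketch with illustrative values (site names and replication/search factors are placeholders to adapt):

```ini
# On the Cluster Manager (server.conf) -- illustrative values only:
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

# Each peer and search head also sets [general] site = site1 or site2.
```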
When I try to download the Extension Manager, I am unable to access the page or download the file. Can you please help me download the appdynamics-extensionmanager.zip file for Windows?
Hi @Somesh, could you describe your requirement in more detail? The SHC-Deployer is a management system that must be configured for a Search Head Cluster in one of the two sites. It doesn't require a secondary copy in the secondary site, because the Search Head Cluster continues to work without the Deployer; the only limitation is that you cannot deploy a new app until the Deployer is available again. At the same time, you can have one Monitoring Console, which you can configure using the documentation at https://docs.splunk.com/Documentation/Splunk/9.3.1/DMC/DMCoverview You could also create a secondary server in the secondary site, but it isn't required for this activity. Ciao. Giuseppe
I have seen the Splunk documentation for setting up a Splunk multisite cluster, but I have not seen anything related to the Monitoring Console & SH Deployer. Can someone suggest how to set up these two?
Hi, I am having a hard time extracting multivalue fields from an event using transforms with mv_add=true. It seems to be partially working: it extracts the first and third values present in the event but skips the second and the fourth. The regex I am using seems to match all the values perfectly on regex101, but I'm not sure why Splunk is unable to capture them all. Following are the sample event and the regex I am using.

Event:
postreport=test_west_policy\;passed\;(first_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[]\;second_post:status:skipped:pass_condition[clear]:fail_condition[]:skip_condition[timed_out]\;third_post:status:failed:pass_condition[]:fail_condition[error]:skip_condition[]\;fourth_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[])

Regex - https://regex101.com/r/r66eOz/1
(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]

So Splunk is matching all values for first_post and third_post in the above event and skipping second_post & fourth_post. I tried the same regex with the rex command, and there it just matches the first_post field values:

| rex field=raw_msg max_match=0 "(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

Can someone please help me figure out if I am missing something here? Thanks.
Hi All, I am using mvzip while working with a JSON file. Now, in the new Splunk dashboards, it seems like the mvzip command is deprecated. Is there any way to extract values from nested JSON apart from mvzip?
Hi @Dabbsy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Is it possible that the token you use is created for a user that does not have permission to list other people's jobs?