Hi, I suggest putting "system" at the end of the resource detection list the way you had it originally. Also, in the new processor, please check the indentation; it looks like there are extra spaces. Next, if everything is working OK, you may not see "azure_resource_name" appear in the infrastructure navigator until the old MTS (metric time series) ages out (approximately 25 hours). You can confirm it's working, though, by going to Metric Finder and searching for "cpu.utilization". This will open a new chart for cpu.utilization; click the "data table" to see the raw data and dimensions. Your Azure VM should be listed twice at this point; confirm that one of the listings has an "azure_resource_name" dimension and that it looks correct. If you see it there, you'll just need to wait ~25 hours for the infrastructure navigator to stop using the old MTS.
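If you prefer, you can sketch the same check in SignalFlow from a chart's plot editor (assuming the standard cpu.utilization metric name as above; the wildcard filter just limits the plot to MTS that carry the new dimension):

    data('cpu.utilization', filter=filter('azure_resource_name', '*')).publish()

If that plot returns your VM, the dimension is flowing and it's only a matter of waiting out the old MTS.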
Hi, unfortunately it didn't work. The output appeared as follows:

    ID_SERVICE                              SERVICE_TYPE     TIMESTAMP
    id_service_value1,servicetype_value1    <blank_value>    <timestamp_ok>
    id_service_value2,servicetype_value2    <blank_value>    <timestamp_ok>

The timestamp was duplicated successfully and the values were split apart as expected, but they ended up joined by a comma in the first column, and the next column was left blank. Additionally, the mvexpand added considerable execution time; the search was performing really fast before, and the performance decreased :(. Even so, I appreciate your time and your response, @richgalloway!
You should put only the MN (manager node) into maintenance mode; that also controls the indexers. If you have installed Splunk correctly and enabled boot-start, then it should work like a regular reboot. Of course, if you want, you can stop Splunk before that. In that case you should use "splunk stop" (e.g. systemctl stop Splunkd) instead of "splunk offline", which you would normally use when you want to move primaries to another node.
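As a rough sketch of the commands involved (paths and the systemd unit name are defaults, adjust to your installation):

    # on the manager node (MN)
    splunk enable maintenance-mode
    # ... reboot the indexer ...
    splunk disable maintenance-mode

    # if you prefer to stop Splunk on the indexer first
    splunk stop        # or: systemctl stop Splunkd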
This is a follow-up question though: I was wondering if we can display the NCAPTest='Yes' or 'No' count along with the percentage? And is there any way to add a percentage symbol (%) on the y-axis, which has intervals of 50?
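To illustrate what I mean, a rough SPL sketch (everything except the NCAPTest field name is just a placeholder):

    ... | stats count(eval(NCAPTest="Yes")) as yes_count, count as total
        | eval percent=round(yes_count*100/total, 1)
        | eval percent_label=tostring(percent)."%"

The "%" symbol on the y-axis itself would presumably still have to come from the chart's axis/unit settings rather than from the search.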
Hi @NightShark, only admin, but I don't like enabling someone to be admin; even if it's a compliance manager, he/she shouldn't access private objects. Ciao. Giuseppe
Hi AppDynamics team, I'm trying to configure a Windows service application, which is hitting an unhandled exception error, to be monitored using the .NET agent, referring to the link below: Configure the .NET Agent for Windows Services and Standalone Applications (appdynamics.com). Here is the part of the config.xml file which I have added:

    <standalone-applications>
      <standalone-application executable="D:\sample project\MQ_ConsoleApp1\MQ_ConsoleApp1\bin\x64\Release\MQ_ConsoleApp1.exe">
        <tier name="DotNet Tier" />
      </standalone-application>
    </standalone-applications>

I have also tried configuring the entry points for the Windows service, but I am unable to get the transactions. Please let me know if I missed any configuration steps, and please help me resolve the issue. Thanks in advance.
With the exception of custom scripted alert actions, which live in $SPLUNK_HOME/bin/scripts, I agree. (My memory is fuzzy here, but I don't think Splunk will run them from an app/bin directory.) This was all a bit of evening hacking and "Can Splunk do that?" fun. Modular inputs don't have to be Python (WinEventLog isn't), but you do lose the helpfulness of the SDK, and as you say, no Splunk-supported Python interpreter is shipped with the UF. Transforming WinEventLog with SEDCMD or INGEST_EVAL should be simple, but transforming XmlWinEventLog requires enumerating multiple Data elements; I don't have a clever method for the latter yet. It's a shame DSP was never made widely available, and Edge Processor is still a walled garden. I generally recommend customers use a third-party product for this specific use case.
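For the classic (non-XML) WinEventLog case, a hedged sketch of what I mean (the stanza name and regexes are illustrative assumptions, not a tested config):

    # props.conf -- either approach works at index time
    [WinEventLog]
    # sed-style rewrite of _raw
    SEDCMD-mask_account = s/Account Name:\s+\S+/Account Name: xxxxx/g
    # or route through an INGEST_EVAL transform
    TRANSFORMS-trim_descr = wineventlog_trim_descr

    # transforms.conf
    [wineventlog_trim_descr]
    INGEST_EVAL = _raw=replace(_raw, "(?ms)This event is generated.*$", "")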
Hello Splunkers, I'm currently implementing a connection from multiple GCP buckets to Splunk Enterprise. The add-on automatically indexes the data from those buckets with the timestamp at which it receives them (so if I have a list of transactions from March to November 2023 that are forwarded today, they will all be indexed with today's time). However, I would like some of that data to be indexed using a time field present in the data, depending on the app that uses it (for example, App 1 has a time field named "Start_date" and App 2 has another one named "end_date"). Unfortunately, I can't think of a way to do it; maybe in the props.conf file, but I'm not sure. Any advice? Thanks
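To make it concrete, this is roughly what I was imagining in props.conf (the sourcetype names, regexes, and time format are just guesses on my part and would need to match the actual data):

    # props.conf
    [gcp:app1]
    TIME_PREFIX = "Start_date"\s*:\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 40

    [gcp:app2]
    TIME_PREFIX = "end_date"\s*:\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 40

I assume this would have to live on the first full (non-UF) Splunk instance that parses the data.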
Hi Team,
While running the below search we are not getting license calculation for 2-3 indexes (they show 0), but for the other indexes I am able to see the results.
index=_internal source="*license_usage.log" sourcetype=splunkd
| stats sum(b) as Bytes by idx
| eval GB=round(Bytes/1024/1024/1024,3)
| rename h as Host, s as Source, st as Sourcetype, idx as Index, GB as "License Used in GB"
| table Index, "License Used in GB"
I am trying to understand why this is happening for only 2-3 indexes. We have the index data present on both indexers.
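To make the question concrete, this is the kind of per-index drill-down I would compare the summary against (the index name is a placeholder):

    index=_internal source="*license_usage.log" type=Usage idx=<your_index>
    | stats sum(b) as Bytes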
I was trying to configure the forwarder for a while and couldn't succeed, so I was watching a video where the person said to make sure your apps' status is enabled. I thought the reason I am not receiving data could be that something was disabled, so I proceeded to enable everything in the Manage Apps section. I then got a message that I needed to restart, but the website couldn't restart Splunk by itself and told me to do it through the command line. I searched for how to do that but couldn't find it, so I decided to restart the PC. Afterwards, when I opened the website I got the message "This site can't be reached: 127.0.0.1 refused to connect." I then tried to stop and start splunkd from cmd with admin access, but that didn't quite fix it either. Example of the Manage Apps section (NOT MINE)
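For reference, what I ran from the command line was roughly this (assuming the default Windows install path, which may not match my setup):

    cd "C:\Program Files\Splunk\bin"
    splunk stop
    splunk start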
Hello @gcusello, I assume that is the case. Is there a capability to allow certain roles to view private knowledge objects? Or another, easier method to let them do so? Thanks, Regards,