All Posts


Hi @nieminej  When the UFs are installed, do they come from an image with the UF pre-installed that you initialise somehow, or is it a vanilla install? It sounds like you are using SCCM to install an app which the DS thinks it controls?

If it were me, I'd have a bare-bones deployment-client app with low precedence (e.g. z_myorg_deployclient) containing your deploymentclient.conf. Deploy this using SCCM, and then when the UF connects to the DS it should pull down the base_uf app - this has a higher precedence than z_myorg_deployclient, so the deploymentclient.conf there will take over, allowing you to make updates in the future if needed. I would definitely avoid having an app controlled by the DS *and* pre-installed on the UF.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
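For reference, a minimal sketch of what that bare-bones app could contain - the app name matches the example above, but the deployment server host and port are placeholders, not values from this thread:

etc/apps/z_myorg_deployclient/local/deploymentclient.conf

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.myorg.example:8089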
  Hi Team, I am reaching out to discuss a requirement we have regarding the monitoring of an application deployed on MS Dynamics 365. Specifically, we need to monitor two servers: one is a front-end server running on IIS, and the other is a back-end server with some application services running, but without any specific technology stack. Could you please help me understand whether monitoring Dynamics 365 via AppDynamics is supported? Additionally, I would appreciate any guidance on how to initiate the monitoring process using AppDynamics SaaS. Regards, Vinodh
We have the same issue after scanning on version 9.4.0. How can we fix it? Thank you
@ITWhisperer  I tried the query but I'm not getting the output:

| makeresults
| eval APP1="appdelta", hostname1=mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time
| foreach hostname1 mode=multivalue [| eval diff=if(mvfind(hostnames2,<<ITEM>>)>=0,diff,mvappend(diff,<<ITEM>>))]
| table APP1, hostname1, hostname2, diff

What I need in the diff column is egfhost.domain1
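One thing worth checking here: the foreach subsearch references hostnames2, but the field created by makeresults is hostname2, so mvfind is looking at a field that doesn't exist. An untested sketch with just that field name corrected:

| makeresults
| eval APP1="appdelta", hostname1=mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time
| foreach hostname1 mode=multivalue [| eval diff=if(mvfind(hostname2,<<ITEM>>)>=0, diff, mvappend(diff,<<ITEM>>))]
| table APP1, hostname1, hostname2, diff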
| foreach hostname1 mode=multivalue [| eval diff=if(mvfind(hostnames2,<<ITEM>>)>=0,diff,mvappend(diff,<<ITEM>>))]
Please share what you have tried.
Yes, that might be an option, but even if it works once, there is the risk that it will still hit memory problems the next time.
Our universal forwarders and indexers are on the latest version, and I have also edited the conf file by adding the stanza, but that didn't work either. I'm starting to think this is an error introduced in the 9.4.0 or 9.4.1 update.
I have a list of hostnames being generated from a left join for different applications, in multivalue table columns:

APP1      hostname1        hostnames2
appdelta  syzhost.domain1  syzhost.domain1
          abchost.domain1  abchost.domain1
          egfhost.domain1

What I need is a separate column with just egfhost.domain1, showing the diff of the two lists.
It was about 20 of 100 indexes that were not fully searchable. We have fixed the hardware issues on node2, so everything is repairing now.
Hi @MichaelM1  Just to check the typo of FROMAT in the transforms.conf settings in your post - is that just in the post, or is it also a typo in your config file? Also, in your transforms.conf you will need to use SOURCE_KEY = _meta, as the default is _raw, and if a previous GUIDe/Project_ID has been set then it will be in _meta. I'm not able to test this at the moment, but let me know how you get on after changing the SOURCE_KEY and I will try to test it out properly later! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
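To make that concrete, an untested sketch of one stanza with the SOURCE_KEY change applied. It also assumes the intended regex was a negative lookahead (?!...) that lost its "!" somewhere along the way, and it uses WRITE_META = true to append the FORMAT output to _meta - neither of those is confirmed by the original post:

[addprojectid]
SOURCE_KEY = _meta
REGEX = ^(?!.*Project_ID::)
FORMAT = Project_ID::123456
WRITE_META = true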
I tried one; it did work, but I can see in the backend that it still executed the query with a 7MB query load.
Can you share an example?
Hi @parumugam  Just to confirm - are you seeing data when you do this? A = data('k8s.container.cpu_limit').publish(label='A') If you click on the "Data Table" tab, do you see all of the fields you're doing the sum BY on? How are you trying to display this? Try your query with the Table view to see if that works. One last thing - I presume you have published the values (e.g. with .publish())? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
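For reference, a minimal SignalFlow sketch of that kind of aggregation - the dimension names kubernetes_cluster and kubernetes_namespace here are placeholders, so substitute whatever dimensions actually show up in your Data Table:

A = data('k8s.container.cpu_limit').sum(by=['kubernetes_cluster', 'kubernetes_namespace']).publish(label='A')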
I have a configuration where an intermediate forwarder is forwarding logs to a central indexer that I do not control. In order to send logs to the indexer they MUST be tagged with a Project_ID and GUIDe in the _meta tag, otherwise the logs get rejected. I would like to set up a multi-tenant-like configuration where the intermediate forwarder is used by several projects that have different Project_ID and GUIDe values. The universal forwarder configuration that I am sending to the clients has the tagging, and the clients send their unique project and GUIDe numbers.

The problem I am having is that the intermediate forwarder (Windows) itself is not tagging its own logs when it sends to the indexers. The clients tag and send their logs just fine. If I apply the same tagging as the clients on the intermediate forwarder, it tags the logs twice or overwrites the tags. What I want to do is tag only the intermediate forwarder's logs, or any logs that are not already tagged. This is what I tried, and it is not working. I was attempting to use regex to only add the tags to logs that are not already tagged using this filter ^(?.*Project_ID::) but this is not working. Any help is appreciated.

Intermediate forwarder:

etc/apps/projtransforms/local/props.conf
[default]
TRANSFORMS-projectid = addprojectid
TRANSFORMS-IntermediateForwarder = addIntermediateForwarder
TRANSFORMS-GUIDe = addGUIDe

etc/apps/projtransforms/transforms.conf
[addprojectid]
REGEX = ^(?.*Project_ID::)
FROMAT = Project_ID::123456
MV_ADD = true

[addGUIDe]
REGEX = ^(?.*GUIDe::)
FROMAT = GUIDe::654321
MV_ADD = true

[addIntermediateForwarder]
REGEX = .*
FORMAT = IntermediateForwarder::XXXXXX
MV_ADD = false

UF client tagging:

/etc/system/local/inputs.conf
[default]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0

[WinEventLog]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0

[perfmon]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0
index = spl_win
Neat trick. But it moves processing to the SH. I'd go for extracting spans, clearing the other fields, possibly including _raw (to conserve memory), and then running mvexpand on spans.
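A minimal sketch of that approach, assuming the multivalue field is called spans (the field name is illustrative, not from the thread) - the first fields keeps only spans plus internal fields, the second explicitly drops _raw before the expand:

... | fields spans
| fields - _raw
| mvexpand spans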
Hi Chris,   I know this is years later but I'm curious if you found a solution for this? I don't believe there's an app related to this yet?
@Gryphus  Since only one searchable copy is available, the indexes are not "fully searchable" - the search factor of 2 is not met. However, all data should still be searchable, because there is at least one searchable copy of each bucket on the surviving indexer, so searches should still work.

In your case you have a 2-node cluster, and based on the details, both the replication factor and the search factor appear to be set to 2. This means each bucket (the basic unit of index storage) should have a primary copy on one indexer and a replica on the other, and both copies are designated as searchable to meet the search factor of 2.

https://community.splunk.com/t5/Deployment-Architecture/Is-it-possible-that-Search-Factor-is-Not-Met-and-All-Data-is/m-p/500305
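If it helps, you can verify replication and search factor fulfilment from the manager node with the standard CLI check (output detail varies by version):

splunk show cluster-status --verbose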
Hi @ej87897  The architecture behind the Deployment Server within Splunk changed in version 9.2, and the data on connections from clients (and which apps they've downloaded) is now stored in indexes prefixed with _ds. The panels that display the clients under the Forwarder Management page rely on this information. If you have your DS configured to send all its data to an indexer tier and have not configured selective forwarding, then it will "appear" like nothing is working - when in fact the clients will still be connecting and being managed by the DS as they should be.

To fix this you need to apply a selective forwarding tweak to your outputs.conf - check out https://docs.splunk.com/Documentation/Splunk/9.4.1/Updating/Upgradepre-9.2deploymentservers

Essentially you need to configure outputs.conf as follows:

[indexAndForward]
index = true
selectiveIndexing = true

Also - have you upgraded your indexers to at least 9.2? If not, they won't have the required indexes configured on them to receive the data. Ensure your indexers have the following indexes:

[_dsphonehome]
[_dsclient]
[_dsappevent]

There may be other nuances depending on your architecture (such as sending via an intermediary forwarder), so check out the docs page above for more information.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@ej87897  Since the forwarders are still sending data and appear in the Monitoring Console, they're clearly functional and communicating with the Splunk infrastructure. The problem seems specific to the Deployment Server (DS) and its Forwarder Management UI.

From "Upgrade pre-9.2 deployment servers":

This problem can occur in Splunk Enterprise 9.2 or higher if your deployment server forwards its internal logs to a standalone indexer or to the peer nodes of an indexer cluster. This issue can occur after an upgrade or in a new installation of 9.2 or higher. To rectify it, add these settings to outputs.conf on the deployment server:

[indexAndForward]
index = true
selectiveIndexing = true

If you add these settings post-upgrade or post-installation, you might need to restart the deployment server.

See: https://docs.splunk.com/Documentation/Splunk/9.4.1/Updating/Upgradepre-9.2deploymentservers