All Posts


Thank you so much for the response and yes it worked.
Try using eval rather than set <eval token="mySource">replace($numSuffix$,"_","")</eval>
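A minimal sketch of what the corrected token chain could look like (the token names come from the question; the <change> wrapper is an assumption about where the tokens are being set). Note the closing $ on $mySource$ — the snippet in the question ends that token reference with a quote character instead:

```xml
<change>
  <!-- eval actually evaluates the expression instead of storing it literally -->
  <eval token="mySource">replace($numSuffix$,"_","")</eval>
  <!-- dependent token can then be set normally -->
  <set token="source">$indexg$ connection $mySource$</set>
</change>
```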
If you're not explicitly limiting allowed clients to a predefined list, the CNs and SANs in the certs don't matter (as long as the certs are not self-signed, which would mean that the CN of the CA is the same as the CN of the issued cert). If you do verify the server name (the sslVerifyServerName setting), there is the additional restriction that the name in the cert presented by the host must match the hostname you're trying to connect to. But at this point you're not using this.

So the first thing to enable is verification of the server's cert. For this you need to have a CA defined on your UF (preferably by setting sslRootCAPath in your server.conf) containing a PEM-encoded certificate of the CA which either issued the indexer's cert directly or is the root CA from which the indexer's cert descends. Then you enable sslVerifyServerCert. If at this point the UF cannot connect to the indexer, there's something wrong with the trust relationship between indexer and UF. Check the logs. Sometimes it helps to do a tcpdump and see where exactly the connection gets terminated and with what alert.

If you manage to get server verification working, it's time to enable client authentication. You have sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem in your inputs.conf (actually this setting is deprecated there and you should use the one from server.conf; if you don't have a separate, different setting there, we can leave it for the moment; if you do - I have no idea how Splunk reacts). That means you need the client (UF) to present a valid certificate on a connection attempt. clientCert = /path/to/your/crypto_material.pem should be enough on the UF end as long as the key is not encrypted. If it is, you also need to set sslPassword. The PEM file must contain the client certificate, the client private key and (optionally) the certification chain, all concatenated into a single file. Then on the indexer's end you simply enable requireClientCert. And you're good to go.
Again - don't do too many things at once. One step at a time. And remember to have valid certificates (properly issued, not self-signed, not expired and so on).
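The settings described above, pulled together into one sketch. The IPs, host names and file paths here are examples, not your actual values, and the PEM files are assumed to already contain cert + key (+ chain) as described:

```
# outputs.conf on the UF (example server address)
[tcpout:default-autolb-group]
server = indexer.example.com:9997
useSSL = true
sslVerifyServerCert = true
clientCert = /opt/splunkforwarder/etc/auth/mycerts/uf_cert_key_chain.pem
# sslPassword = <only needed if the private key is encrypted>

# server.conf on the UF - CA that issued the indexer's cert
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/myCertAuthCertificate.pem

# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/indexer_cert_key_chain.pem
requireClientCert = true
```

As the post says: enable one piece at a time and re-test the connection after each change.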
Please help me adapt my current request.
Hi, I have a token as below in my dashboard:

<set token="mySource">replace($numSuffix$,"_","")</set>

and I have another token utilising the above:

<set token="source">$indexg$ connection $mySource"</set>

In the query I have:

<search>
  <query>$source$ | timechart count by host</query>
</search>

Unfortunately this is not working. In the Splunk search the token is not being evaluated; it shows up as:

index=xer connection replace(_45t66,"_","") | timechart count by host

I tried with

<set token="mySource">replace($numSuffix|s$,"_","")</set>

but it's of no use. Can someone help me with this? Thanks.
I'm using this built-in lookup to determine the country for GPS coordinates as follows:

| lookup geo_countries latitude, longitude output featureId as Country

The issue is that this lookup doesn't return anything for some coordinates. Some examples:

40.711157112847644,-74.01527355439009
40.8293703,-73.9709533
22.2866493,114.195508
-33.84808469677436,151.28320075054089
-38.0159081,-57.5320673

| makeresults
| eval latitude="40.711157112847644"
| eval longitude="-74.01527355439009"
| lookup geo_countries latitude, longitude output featureId as Country

Google Maps is capable of finding an approximate location for the above coordinates. Can anybody provide some guidance please? Many thanks.
That's a log:

2025-01-20 04:38:04.142, HOSTNAME="AEW1052ETLLD2", PROJECTNAME="AQUAVISTA_UAT", JOBNAME="Jx_104_SALES_ORDER_HEADER_FILE", INVOCATIONID="HES", RUNSTARTTIMESTAMP="2025-01-19 20:18:25.0", RUNENDTIMESTAMP="2025-01-19 20:18:29.0", RUNMAJORSTATUS="FIN", RUNMINORSTATUS="FWW", RUNTYPENAME="Run"
OK. Back up a little. Read the descriptions for those functions. In detail.

searchmatch() needs a string containing normal search condition(s). That means you could use it like this:

searchmatch("index=\"*prod*\"")

As you can see, you need to escape the inner quotes if your search terms contain them.

The match() function expects a regex, so you can't use simple wildcards:

match(index,".*prod.*")

The like() function uses SQL-like matching, so you'd use % as the wildcard:

like(index,"%prod%")
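Applied to the index-naming scheme from the question, a like()-based version might look like this. The "Other" fallback value is an assumption; also note that in like() the underscore is a single-character wildcard, which is harmless here because case() checks the patterns in order:

```
index=sony*
| eval env=case(like(index,"%non_prod%"), "Non-Prod",
                like(index,"%prod%"), "Prod",
                true(), "Other")
```

The ordering matters: "%prod%" would also match the non_prod indexes, so the non_prod test has to come first.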
 
@PickleRick When trying this I'm getting an error...
Thank you very much. I followed your suggestion and modified output.cnf as follows, and it worked successfully.   [tcpout] defaultGroup = default-autolb-group [tcpout-server://52.195.142.152:9997]... See more...
Thank you very much. I followed your suggestion and modified outputs.conf as follows, and it worked successfully.

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://52.195.142.152:9997]

[tcpout:default-autolb-group]
server = 52.195.142.152:9997
disabled = false
sslVerifyServerCert = false
useSSL = true

I would like to ask: if I want to use a client certificate, and my server certificate's CN is splunk.xx.net while my client certificate's CN is uf.xx.net, how should I configure the output and input settings? Additionally, I want both the server and the client to mutually verify each other's certificates. Could you give me a sample?
eval env= if(index="*non_prod*", "Non-Prod", "Prod")

This won't work. At least not the way you want it to. Your condition tries to match the index to the literal value *non_prod*. Since an index name cannot contain asterisks, this condition will never evaluate to true. You need to use one of the other comparison functions - https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ConditionalFunctions

Suitable candidates:
like()
match()
searchmatch()
Hello @kiran_panchavat, Thanks for your reply and for confirming that Splunk Enterprise doesn't require this option. We can launch the Splunk Enterprise console through the Search option from AppDynamics, so the connection works well. Thanks again for your prompt reply. Kudos to you. Regards, Selvaganesh E
OK. You can't visualize it like this without additional non-SPL logic (like custom JS in your dashboard). Apparently the colour of the grid cell depends on another factor (job status) which is not contained within the cell itself. That's one thing.

Two other things you're facing (but those can be solved with SPL) are:

1) You need to combine two values - start time and end time - into a single string value. Splunk cannot "merge cells", so you need to have a single value for a single grid cell. That's relatively easy. Just use concatenation on the two string fields with a "\n" char to split the line in two, or combine the two values into a multivalued field.

2) This is more tricky - you can "wrap" your data set into single days by means of timechart, but you can only have one field to split your timechart by. So you can't do this timechart over both job _and_ country. You'd need to first combine job and country into a single field to categorize your jobs, do a timechart over that field, and finally split the field back into two separate fields.
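A rough SPL sketch of those two steps. The field names job, country, start_time and end_time are assumptions based on the description, and the "|" separator is an arbitrary choice (pick any character that can't occur in the job or country names):

```
| eval cell=strftime(start_time,"%H:%M") . "\n" . strftime(end_time,"%H:%M")
| eval series=job . "|" . country
| timechart span=1d latest(cell) BY series
| untable _time series cell
| eval job=mvindex(split(series,"|"),0),
       country=mvindex(split(series,"|"),1)
```

The untable/split at the end undoes the artificial combined field once timechart has laid the data out by day.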
The general answer to questions like "how to find which hosts send to which indexes" is "you can't do that reliably". There are some things you can do to find this info in specific situations, but they will not cover all possible scenarios.

1. As @livehybrid already pointed out, you can try browsing through the forwarders' metrics. There are two caveats here:
- the metrics are limited to a fixed number of top data points, so if your forwarder is sending to a huge number of different indexes you might not see them all
- events can be rerouted on HFs/indexers to indexes different from the ones they were initially destined for

2. You can simply check the host field. But this is a very unreliable technique and only works if you're capturing the events locally with the forwarder and don't override the host in any way.

3. You can configure your environment (but this needs to happen beforehand) so that forwarders add metadata to events by means of additional indexed fields or - for some types of sources - the source field. This can get complicated and difficult to maintain if you don't use orchestration tools, and it may have limitations if you're using multi-hop ingestion paths.
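For point 1, a sketch of what browsing the forwarder metrics could look like. In the per_index_thruput group of metrics.log the series field carries the index name; remember the caveat above that only the top data points are logged:

```
index=_internal source=*metrics.log* group=per_index_thruput
| stats sum(kb) AS total_kb BY host series
| rename series AS index
| sort - total_kb
```

Here host is the forwarder reporting the metric, so the result approximates a host-to-index mapping for whatever made it into the top series.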
All I want to get from the subsearch is to bring back the field actions.  It can probably be a much smaller search.
Hello, We have separate indexes created for non-prod and prod.

Sample index names:
sony_app_XXXXXX_non_prod - for the non-prod env
sony_app_XXXXXX_prod - for the prod env

XXXXXX are Application ID numbers (different per app), and we have other indexes as well (besides the non-prod and prod ones). I want a field called env derived from the index name: for all non-prod indexes env should be Non-Prod, and for prod indexes env should be Prod.

Given below command:

index=sony* | eval env= if(index="*non_prod*", "Non-Prod", "Prod")

This will not work because we also have other indexes which include neither non_prod nor prod, but it is giving all values as Prod in env. Kindly help me with a solution to achieve this.
Ok. You have two ends of the connection; don't try to fiddle with both of them at the same time. First configure the receiving end (in your case - the indexer); when you have it working properly, start configuring the client (the UF).

Your inputs.conf on the indexer looks OK. You should now be able to connect with

openssl s_client -connect your_indexer:9997

and get a properly negotiated SSL connection (as long as your client trusts your indexer's cert issuer). If you're at this step, you can move forward. If at this step the connection is rejected by the indexer because you're not presenting a cert, there's something wrong with your indexer's configuration.

If you have sslVerifyServerCert=false, you should not need any other parameters except useSSL=true, because your UF will not be verifying the cert anyway.

Remember to always check your configs with btool:

splunk btool check
splunk btool inputs list --debug
splunk btool outputs list --debug
@SelvaganeshEThe add-on you are trying to use has been archived. I recommend checking this add-on for AppDynamics integration. https://splunkbase.splunk.com/app/3471 
@SelvaganeshE The "IP Allow List" feature is specific to Splunk Cloud and is not available in Splunk Enterprise (on-premise) deployments. For integrating Splunk Enterprise with AppDynamics SaaS, you ... See more...
@SelvaganeshE The "IP Allow List" feature is specific to Splunk Cloud and is not available in Splunk Enterprise (on-premise) deployments. For integrating Splunk Enterprise with AppDynamics SaaS, you might need to look into alternative methods for securing and managing access, such as configuring firewall rules or using other network security measures.