All Posts


There is no documented way to do that.  Splunk recommends engaging Professional Services for that situation.  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment#Is_there_any_way_to_migrate_my_legacy_data.3F It's not as simple as copying data from one indexer to another because care must be taken to ensure bucket IDs are not duplicated.
Hey Rick, thanks for responding! I saw that page, but unfortunately it doesn't specifically mention costs or data transfer limitations...  say they're restoring data daily (edge case I know), but do they only EVER pay for the 500GB block or will they be surprised by transfer costs if they utilize the feature too much? *MY* answer is "data transfer costs are likely built into the cost model"  but they want a specific answer.
Hi @Ajith.Kumar, Are you familiar with End User Monitoring? https://docs.appdynamics.com/appd/21.x/21.5/en/end-user-monitoring
Try the below. It will start the weekly bin on Saturday:

| bin span=1w@w6 _time

For Monday it would be:

| bin span=1w@w1 _time
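If it helps to see the snapping logic outside of SPL, here is a rough Python sketch of what `span=1w@wN` does: it snaps each timestamp back to the most recent midnight of the chosen weekday. Splunk numbers weekdays with @w0 = Sunday through @w6 = Saturday, while Python's `weekday()` uses Monday = 0, so the example maps @w6 (Saturday) to Python weekday 5. The dates are made up for illustration.

```python
from datetime import datetime, timedelta

def snap_to_weekday(ts: datetime, weekday: int) -> datetime:
    """Snap ts back to the most recent midnight of the given weekday
    (Python convention: Monday=0 ... Sunday=6). This mimics Splunk's
    span=1w@wN snapping, where @w6 = Saturday maps to weekday=5 here
    and @w1 = Monday maps to weekday=0."""
    midnight = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    days_back = (midnight.weekday() - weekday) % 7
    return midnight - timedelta(days=days_back)

# Wednesday 2024-01-10 snaps back to Saturday 2024-01-06
print(snap_to_weekday(datetime(2024, 1, 10, 15, 30), weekday=5))
```

Every event falling between two consecutive Saturdays then shares the same bucket start, which is exactly what `bin` uses as the new `_time` value.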
Thanks! I initially tried that call and it wasn't working for me, but I eventually realized it was because I was adding my fields to the "Custom fields" and not the "CEF" settings. After speaking with the Splunk team, it sounds like Custom fields are meant for reference within SOAR, like adding some information to the HUD, whereas CEF is what's actually used to access the artifact data. I appreciate your reply.
Try this: phantom.collect2(container=container, datapath=["artifact:*.cef.FIELD_NAME"])
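Since `phantom.collect2()` only runs inside a SOAR playbook, here is a standalone Python sketch of what that `artifact:*.cef.FIELD_NAME` datapath effectively does: walk every artifact in the container and pull out the named CEF field. The container dict below is a made-up stand-in for a real SOAR container, and `collect_cef_field` is a hypothetical helper, not a SOAR API.

```python
def collect_cef_field(container: dict, field_name: str) -> list:
    """Return one value per artifact, with None when the CEF field is
    absent (roughly mirroring how collect2 pads missing datapath values)."""
    results = []
    for artifact in container.get("artifacts", []):
        # Each artifact's data lives under its "cef" dictionary;
        # "Custom fields" are not part of this lookup.
        results.append(artifact.get("cef", {}).get(field_name))
    return results

# Made-up container for illustration
container = {
    "artifacts": [
        {"cef": {"sourceAddress": "10.0.0.1"}},
        {"cef": {"sourceAddress": "10.0.0.2", "destinationAddress": "10.9.9.9"}},
        {"cef": {}},  # field missing -> None
    ]
}
print(collect_cef_field(container, "sourceAddress"))
# -> ['10.0.0.1', '10.0.0.2', None]
```

This also illustrates the point in the reply above: values placed in "Custom fields" would not appear under `artifact["cef"]`, so a `*.cef.FIELD_NAME` datapath would return nothing for them.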
Hi @Zoltan.Gutleber, Thanks so much for following up with the solution. We really like to see members sharing new discoveries and insights with the community!
It doesn't work; it complains about indentation.
Hi @Jack90, sorry I didn't realize you were talking about Splunk Cloud! Forget Indexers! Ciao. Giuseppe
I am getting the below error from splunkd. How do I fix the root cause of this error? Please suggest a workaround.
Thank you so much for your answer. Could you kindly clarify what you mean by setting roles on indexers in Splunk Cloud?
Hi @Viveklearner, please see my approach and adapt it to your data:

<your_search>
| eval Status=case(status>=200 AND status<400,"Success", status>=400 AND status<500,"Exception", status>=500,"Failure", true(),status)
| stats count BY Status

Note the final true(),status pair: case() takes condition/value pairs, so the catch-all default needs its own true() condition.

Ciao. Giuseppe
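As a cross-check of the bucketing logic outside of SPL, here is a small Python sketch with the same ranges: 2xx-3xx as Success, 4xx as Exception, 500 and above as Failure. The status list is made up for illustration.

```python
from collections import Counter

def categorize(status: int) -> str:
    # Same bucketing as the case() expression above.
    if 200 <= status < 400:
        return "Success"
    if 400 <= status < 500:
        return "Exception"
    if status >= 500:
        return "Failure"
    return str(status)  # catch-all, like the true(),status default

statuses = [200, 201, 301, 404, 403, 500, 502, 302]
print(Counter(categorize(s) for s in statuses))
# -> Counter({'Success': 4, 'Exception': 2, 'Failure': 2})
```

The key point in both versions is that categorization happens per event first, and counting happens afterwards, which is why the SPL puts eval before stats.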
We have a range of status codes from 200 to 600. I want to search the logs and produce output in the sample format below, with 200 to 400 as Success, 401 to 500 as Exception, and 501 to 600 as Failure:

Success - 100
Exception - 44
Failure - 3

I am able to get data in the above format, but I get duplicate rows for each category, e.g.:

Success - 10
Success - 40
Success - 50
Exception - 20
Exception - 24
Failure - 1
Failure - 2

Query:

Ns=abc app_name=xyz
| stats count by status
| eval status=if(status>=200 and status<400,"Success",status)
| eval status=if(status>=400 and status<500,"Exception",status)
| eval status=if(status>=500,"Failure",status)

Kindly help.
Hi @krutika_ag, if these Splunk servers are sending internal logs to Splunk, you could use something like this:

For Windows servers:

index=_internal
| rex field=source "^(?<splunk_home>.*)Splunk"
| dedup host
| table host splunk_home

For Linux servers:

index=_internal
| rex field=source "^(?<splunk_home>.*)splunk"
| dedup host
| table host splunk_home

Ciao. Giuseppe
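For anyone who wants to test the extraction logic locally, here is a rough Python equivalent of the rex above, using made-up example paths. One caveat worth knowing: a greedy `.*` (as in the SPL) matches up to the *last* occurrence of "splunk" in the path, so this sketch uses the lazy `.*?` to stop at the first occurrence, which is usually what you want for the install directory.

```python
import re

# Capture everything in the source path up to the Splunk install
# directory. Lazy .*? stops at the FIRST "Splunk"/"splunk"; a greedy
# .* would run to the last occurrence (e.g. the one in "splunkd.log").
pattern_windows = re.compile(r"^(?P<splunk_home>.*?)Splunk")
pattern_linux = re.compile(r"^(?P<splunk_home>.*?)splunk")

# Made-up example source paths
win_source = r"C:\Program Files\Splunk\var\log\splunk\splunkd.log"
lin_source = "/opt/splunk/var/log/splunk/splunkd.log"

print(pattern_windows.match(win_source).group("splunk_home"))  # C:\Program Files\
print(pattern_linux.match(lin_source).group("splunk_home"))    # /opt/
```

The same greediness caveat applies to the SPL version: if the extracted splunk_home comes back longer than expected, try `.*?` in the rex as well.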
Hi @anooshac, my sample contains no logic beyond what you described, so the order of the values isn't relevant and can also be different. If you have many values, I suggest using a lookup. Ciao. Giuseppe
Hello David,

In general, the different AppDynamics components use MySQL databases, not a centralized one but one per component (each Controller has its own MySQL database, the Enterprise Console has its database, EUM too, etc.). Check the bundled database versions here: Controller Component Versions (appdynamics.com). You don't have to be a MySQL super-admin, but it is mandatory to write down the admin user and password, as Support will need them for any support issue. You will also find here how to back up and restore the bundled database: Controller Data Backup and Restore (appdynamics.com).

The Database Agent is the agent that monitors the different databases your application works with. It connects to these databases remotely through a JDBC connection and runs queries to collect performance data about each of them. You can find the supportability matrix here: Database Visibility Supported Environments (appdynamics.com).

BR
Hi @Jack90,

Answering your questions:

1) Roles aren't distributed between Splunk servers; you have to populate them manually. Remember that it's mandatory to create roles on Search Heads and Indexers, but not on the other servers.

2) I haven't seen best practices for role creation, so I'll give you just one hint: avoid inheritance, because you could inherit capabilities and grants that you don't want.

3) You can create roles using the GUI or conf files; it's the same thing. I prefer the GUI to avoid syntax errors.

You can find more details at https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/UseaccesscontroltosecureSplunkdata and https://lantern.splunk.com/Splunk_Success_Framework/People_Management/Setting_roles_and_responsibilities

Ciao. Giuseppe
The link provided is to this question, not to any documentation. If the TA is already installed on the indexers then you have what you need.  Just install the same TA on the forwarders.
I have set up Splunk locally and am now trying to connect to it via Java code. The Service.connect() step passes, but it fails when I try to create the search job with jobs.create(mySearch):

import java.io.IOException;

import com.splunk.HttpService;
import com.splunk.Job;
import com.splunk.JobCollection;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

/**
 * Log in using an authentication token.
 */
public class SplunkTest {

    static Service service = null;

    public static void main(String[] args) throws InterruptedException, IOException {
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        String token = "REDACTED";
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setPort(8089);
        loginArgs.setHost("localhost");
        loginArgs.setScheme("https");
        loginArgs.setToken(String.format("Bearer %s", token));

        // Initialize the SDK client
        service = Service.connect(loginArgs);
        System.out.println(service.getHost());
        System.out.println("connected successfully");

        // Retrieve the collection of search jobs
        JobCollection jobs = service.getJobs();

        // Create a simple search job -- this is where it fails
        String mySearch = "search * | head 5";
        Job job1 = jobs.create(mySearch);
    }
}
Here are my findings from a case I opened on this issue a while back; this fixed it for me. Splunk verifies the TLS certificates using SHA-1 cryptography, and the default crypto policy on the Linux server needed to be updated to allow SHA-1:

update-crypto-policies --set DEFAULT:SHA1

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening