All Posts

Adding to what @yeahnah and @isoutamo already said: yes, the zero-ingest license seems to be the way to go; otherwise you'll have a huge headache trying to make all your data available to a single Splunk instance. (To be honest, while it's not explicitly stated anywhere, I wouldn't be surprised if Splunk Free didn't support SmartStore; after all, it's relatively enterprise-level functionality.) Also, be very cautious not to let your environment get into a state where you have an expired license, because then you must get an unlock license from the sales team. It's not enough to just upload a renewed license - once it's locked you have to unlock it "manually".
OK. A few things here and there.
1. The format for the cert file (for inputs and generally for all splunkd-related activity except the web UI, which can be a bit confusing sometimes) is:
<subject cert (i.e. your forwarder or Splunk server)>
<private key>
<CA chain (if needed)>
(all of them PEM-encoded; see the sketch after this post)
2. If you don't want to authenticate the forwarder with a cert, there's no point in generating one for it.
3. The "SSL23_GET_CLIENT_HELLO:unknown protocol" message is a fairly generic one. Check the indexer's logs for anything regarding a connection from the forwarder's IP. This should tell you more.
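To illustrate point 1, a minimal sketch of assembling such a file, with hypothetical file names (server.pem, server.key, ca.pem - substitute your own):

# Order matters: subject cert first, then its private key, then the CA chain
cat server.pem server.key ca.pem > server_combined.pem
# Sanity check: openssl reads the first cert in the bundle, which should be the subject cert
openssl x509 -in server_combined.pem -noout -subject -issuer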
OK. Back up a little. Where does this data come from? You seem to have multiple multivalued fields. That might be a problem, because with Splunk there is no implied relationship between those fields whatsoever, so values in one multivalued field do not have to be connected with values in another multivalued field. And their order doesn't need to match the order in another multivalued field. Take this for example:

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"

It will give you this:

a   b   c
---------
a       c
    b   c
a   b

But if you try to "squeeze" it into multivalued fields by doing

| stats list(*) as *

You'll get

a   b   c
---------
a   b   c
a   b   c

These don't match the "layout" of the input data for the stats command. So be extremely cautious when handling multivalued fields, because you might get completely different values from what you expect.
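Not from the original post, but one common workaround sketch for this problem: glue the fields together per event before aggregating, so each combined value carries its own pairing through the stats (the "-" placeholder and field names here are just for illustration):

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"
| eval combined=coalesce(a,"-")."|".coalesce(b,"-")."|".coalesce(c,"-")
| stats list(combined) as combined

Each value of combined now looks like "a|-|c", "-|b|c", "a|b|-", so the original row layout survives the stats and can be split back apart later (e.g. with split() and mvindex()).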
This is the SPL magic I was missing. Now I can get a basic understanding of which indexes might be searched less frequently than others.
Already did - unhelpful support; they literally told me to try changing the Wi-Fi network. I haven't had this kind of support for years.
<condition match="$Services$ == &quot;vpc&quot;">
</condition>
<!--eval token="VPC_details">if(match($Services$, "vpc"), "true", "false")</eval>
<set token="VPC_details"></set>
<unset token="S3_details"></unset>
<unset token="EC2_details"></unset-->
<condition>
  <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
</condition>

I tried this but it doesn't work.
Hi @poojak2579,
good for you, see you next time!
Ciao and happy splunking,
Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
Hi @Vfsglobal,
I'm not sure it's possible to install Splunk on an iPhone, but to install it on macOS you can follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/InstallonMacOS
Ciao.
Giuseppe
I am new here. I want to install and use Splunk on my iPhone and Mac. How do I install it, and where do I start?
I am facing the same issue.
Hello, I am getting an "Action forbidden" error when going to "https://<hostname>/en-US/app/search/analytics_workspace" on Splunk Cloud. Please note: I am logged in with the sc_admin role. Thanks
@vnravikumar thanks for that - it helped a lot, but I have multiple fields. In that case it works for STATIC filters, but for dynamic ones the linked dashboard's filter shows the "Field for Value" name instead. Thanks in advance.
One of the issues I recently dealt with was a delay in sending Security channel logs from Active Directory, which I finally resolved after a few days. Here are the steps I took to fix the problem:

1. I investigated the queue issue in the different pipelines. This search helps identify queue problems that cause delays:

index=_internal host=* blocked=true

This way, you can check whether the issue is with the universal forwarder, the heavy forwarder, or a higher tier. I experienced this issue with both UF and HF. I increased the queue size and added the following parameter along with the queue adjustment in /etc/system/local/server.conf:

parallelIngestionPipelines = 2

https://conf.splunk.com/files/2019/slides/FN1570.pdf

2. To adjust the maximum throughput rate in the ingestion pipeline, I modified the following parameter in limits.conf:

[thruput]
maxKBps = 0

https://community.splunk.com/t5/Getting-Data-In/How-can-we-improve-universal-forwarder-performance/m-p/276195

3. The final and most effective step was changing the following parameter in the UF's inputs.conf:

use_old_eventlog_api = true

If you have added the parameter evt_resolve_ad_obj = true to translate SIDs/GUIDs and a domain controller cannot perform the translation, it passes the task to the next domain controller and waits for a response before proceeding, which can cause delays. To fix this, I added:

evt_dc_name = localhost

By implementing the above steps (a consolidated inputs.conf sketch follows below), logs were successfully received and indexed in real time. Thank you for taking the time to read this. I hope it helps you resolve similar issues.
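Not part of the original write-up, but for reference, a minimal sketch of how those event-log settings might sit together in the UF's inputs.conf, assuming the standard Security channel stanza; adjust to your own deployment:

[WinEventLog://Security]
disabled = 0
# Legacy Event Log API, which resolved the delay in this case
use_old_eventlog_api = true
# SID/GUID translation on (the post wrote "true"; the documented values are 1/0)
evt_resolve_ad_obj = 1
# Resolve against the local machine only, so a slow or unreachable DC can't stall ingestion
evt_dc_name = localhost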
Hi @inventsekar,

1) As I recall, I only generated the lookup CSV file for testing in Tamil. An all-language lookup might be size-prohibitive. The best SPL-based workaround to count all graphemes seems to be an eval expression using the \X regular expression token to match Unicode sequences. The simplest expression was:

| eval count=len(replace(field, "\\X", "x"))

2) The external lookup allowed programmatic access to Python modules or any other library/program if called from an arbitrary script. The example returned a Unicode character category, but the subsequent counting solution wasn't comprehensive. In Bash, calculating the number of characters may be as simple as:

echo ${#field}
=> 58

but this suffers the same problem as our earlier efforts by not taking into account marks and code sequences used to generate graphemes. Is Perl better?

perl -CS -lnE 'say length' <<<${field}
=> 58

As before, the length is incorrect. I'm not a Perl expert, but see https://perldoc.perl.org/perlunicode: "The only time that Perl considers a sequence of individual code points as a single logical character is in the \X construct ...." That leads us to:

perl -CS -lnE 's/\X/x/g; say length' <<<${field}
=> 37

There may be a better native Perl, Python, etc. solution, but calling an external program is more expensive than the equivalent SPL.

3) If you only need to count graphemes, I would use the eval command. What other use cases did you have in mind?
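For comparison, a short sketch of the same grapheme count in Python. This assumes the third-party regex module, since the stdlib re does not support \X:

# pip install regex  (the stdlib "re" module has no \X support)
import regex

field = "e\u0301"  # hypothetical sample: "é" as e + combining acute (2 code points, 1 grapheme)
print(len(field))                        # 2 -- counts code points, same problem as ${#field}
print(len(regex.findall(r"\X", field)))  # 1 -- \X matches one extended grapheme cluster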
I don't quite follow your logic, but your solution will probably require mv eval functions and/or foreach. E.g. you can find the Category index into your LookupCategory something like this:

| eval c=0
| foreach mode=multivalue LookupCategory
    [ eval mv_match=case(Category=<<ITEM>>, c, Category><<ITEM>>, -c, true(), mv_match), c=c+1 ]

i.e. a positive mv_match means the MV index of an exact match (offset from 0). A negative mv_match result indicates the last LookupCategory that Category was greater than, and an empty result means Category was never greater than any LookupCategory. Then, with that knowledge, you can mvindex() the other MV values based on your needs, e.g. abs(mv_match) - see the sketch below.
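As a follow-on sketch only - assuming a hypothetical companion multivalue field LookupValue that is aligned with LookupCategory - picking out the entry at the matched position could look like:

| eval matched_value=if(isnotnull(mv_match), mvindex(LookupValue, abs(mv_match)), null())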
Look at your condition match statements. E.g. your first one does

<condition match="$row.Services$ != &quot;s3-bucket&quot;">

and then it sets S3_details="true" - same for VPC and the other. So you probably want to change your matches to == rather than !=
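For illustration only, the first condition from the question rewritten that way, reusing the question's token names (untested):

<condition match="$row.Services$ == &quot;s3-bucket&quot;">
  <set token="S3_details">true</set>
  <unset token="VPC_details"></unset>
  <unset token="EC2_details"></unset>
</condition>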
I've set up a dev 9.2 Splunk environment, and I'm trying to use a self-signed cert to secure forwarding. But every time I attempt to connect the UF to the indexing server it fails -_-

I've tried a lot of permutations of the below, all ultimately ending with the forwarder unable to connect to the indexing server. I've made sure permissions are set to 6000 for cert and key, made sure the forwarder and indexer have separate common names, and created multiple cert types. But I'm at a bit of a loss as to what I need to do to get the forwarder and indexer to connect over a self-signed certificate. Any help is incredibly appreciated. Below is some of what I've attempted - trying to not make this post multiple pages long X)

Simple TLS Configuration

Generating indexer certs:

openssl genrsa -out indexer.key 2048
openssl req -new -x509 -key indexer.key -out indexer.pem -days 1095 -sha256
cat indexer.pem indexer.key > indexer_combined.pem

Note: I keep reading that the cert and key need to be 1 file, but I'm not sure on this.

Generating forwarder certs:

openssl genrsa -out forwarder.key 2048
openssl req -new -x509 -key forwarder.key -out forwarder.pem -days 1095 -sha256
cat forwarder.pem forwarder.key > forwarder_combined.pem

Indexer configuration:

[SSL]
serverCert = /opt/tls/indexer_combined.pem
sslPassword = random_string
requireClientCert = false

[splunktcp-ssl:9997]
compressed = true

Outcome: Indexer listens on port 9997 for encrypted communications.

Forwarder configuration:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
server = 192.168.110.178:9997
compressed = true

[tcpout-server://192.168.110.178:9997]
sslCertPath = /opt/tls/forwarder_combined.pem
sslPassword = random_string
sslVerifyServerCert = false

Outcome: Forwarder fails to communicate with indexer.

Logs from forwarder:

ERROR TcpInputProc [27440 FwdDataReceiverThread] - Error encountered for connection from src=192.168.110.26:33522. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

Testing with openssl s_client:

openssl s_client -connect 192.168.110.178:9997 -cert forwarder_combined.pem -key forwarder.key

Output: Unknown CA (I didn't write the exact message in my notes, but it generally says the CA is unknown.)

Note: Not sure if I need to add sslVersions = tls1.2, but that seems outside the scope of the issue.

Troubleshooting the connection, running openssl s_client raw:

openssl s_client -connect 192.168.110.178:9997

Output received:

CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/z9gt7bhz

Further troubleshooting: added the indexer's self-signed certificate to the forwarder:

...
sslPassword = random_string
sslVerifyServerCert = true
sslRootCAPath = /opt/tls/indexer_combined.pem

Outcome: same error message.

Testing with s_client:

openssl s_client -connect 192.168.110.178:9997 -CAfile indexer_combined.pem
Connecting to 192.168.110.178
CONNECTED(00000003)
Can't use SSL_get_servername

Full s_client message is here: https://pastebin.com/BcDvJ2Fs
You can generally get there with SPL. E.g. here's a bit of a hack, which has a stab at it based on your data example:

| makeresults format=csv data="title,totalEventCount,frozenTimePeriodInSecs,count,usedData
_audit,771404957,188697600, ,
_configtracker,717,2592000, ,
_dsappevent,240,5184000, ,
_dsclient,232,5184000, ,
_dsphonehome,843820,604800, ,
_internal,7039169453,15552000, ,
_introspection,39100728,1209600, ,
_telemetry,55990,63072000, ,
_thefishbucket,0,2419200, ,
 , , ,22309,_*
 , , ,1039,_audit
 , , ,2,_configtracker
 , , ,1340,_dsappevent
 , , ,1017,_dsclient
 , , ,1,_dsclient]
 , , ,709,_dsphonehome
 , , ,2089,_internal
 , , ,117,_introspection
 , , ,2,_metrics
 , , ,2,_metrics_rollup
 , , ,2,_telemetry
 , , ,2,_thefishbucket"
| eval title=coalesce(title, usedData)
| fields - usedData
| stats values(*) as * by title
| eventstats values(eval(if(match(title, "\*"), title."##".title."##".count, null()))) as wildcard_indexes
| eval wildcard_indexes=mvmap(wildcard_indexes, replace(wildcard_indexes, "\*(.*##)?", ".*\1"))
| eval count=count+sum(mvmap(wildcard_indexes, if(match(title, mvindex(split(wildcard_indexes, "##"), 0)) AND title!=mvindex(split(wildcard_indexes, "##"), 1), mvindex(split(wildcard_indexes, "##"), 2), 0)))
| fields - wildcard_indexes
The Free license doesn't include cluster or distributed capabilities, so you cannot use Free for this. Basically, you could maybe convert the multisite environment back to a single-instance environment before your license expires?