All Posts

Hi @M2024X_Ray , good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
@mpc7zh  OpenSSL 1.0.2zj-fips 30 Jan 2024
Dear All, some dynamic sources in my environment are ingesting more data into Splunk, and the license limit gets breached. Is there any way to detect such a source as an outlier through MLTK? For example, the Cisco ASA sourcetype has multiple sources (firewalls) which together ingest around 10 GB of data daily; then suddenly one day license usage reaches 20 GB. How can I identify which source sent more data into Splunk, without creating a manual threshold or average of the data?
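One possible approach is to baseline daily volume per source from the license usage log and let MLTK flag deviations. A minimal sketch (requires the Machine Learning Toolkit app; the sourcetype filter st=cisco:asa, the model name asa_source_model, and the threshold are assumptions to tune - the license_usage.log fields s, st, and b are standard):

index=_internal source=*license_usage.log type=Usage st=cisco:asa
| eval GB=b/1024/1024/1024
| bin _time span=1d
| stats sum(GB) as daily_GB by _time, s
| fit DensityFunction daily_GB by s threshold=0.01 into asa_source_model

Once the model is trained, run the same base search with | apply asa_source_model appended; rows where IsOutlier=1 identify the sources whose daily volume falls outside their learned distribution, with no manual threshold.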
Thank you, case 3635939 has been created!
Adding to what @yeahnah and @isoutamo already said: yes, the zero-ingest license seems to be the way to go; otherwise you'll have a huge headache trying to make all your data available for a single Splunk instance. (To be honest, while it is not explicitly stated anywhere, I wouldn't be surprised if Splunk Free didn't support SmartStore; after all, it's a relatively enterprise-level functionality.) Also, be very cautious not to get your environment into a state in which you have an expired license, since then you must get an unlock license from the sales team. It's not enough to just upload a renewed license - once it's locked, you have to unlock it "manually".
OK. A few things here and there.
1. The format for the cert file (for inputs, and generally for all splunkd-related activity except the web UI, which can be a bit confusing sometimes) is:
<subject cert (i.e. your forwarder or Splunk server)>
<private key>
<CA chain (if needed)>
(all of them PEM-encoded)
2. If you don't want to authenticate the forwarder with a cert, there's no point in generating one for it.
3. The "SSL23_GET_CLIENT_HELLO:unknown protocol" message is a fairly generic one. Check the indexer's logs for anything regarding connections from the forwarder's IP. This should tell you more.
OK. Back up a little. Where does this data come from? You seem to have multiple multivalue fields. That might be a problem, because with Splunk there is no implied relationship between those fields whatsoever, so values in one multivalue field do not have to be connected with values in another multivalue field, and their order doesn't need to match the order in another multivalue field. Take this for example:

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"

It will give you this:

a b c
a   c
  b c
a b

But if you try to "squeeze" it into multivalue fields by doing

| stats list(*) as *

you'll get:

a b c
a b c
a b c

These don't match the "layout" of the input data for the stats command. So be extremely cautious when handling multivalue fields, because you might get completely different values from what you expect.
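If the values do belong together row by row, one common workaround is to zip them into a single multivalue field before aggregating, so the alignment survives. A minimal sketch using the same sample data (the "|" delimiter is an arbitrary choice - pick one that cannot appear in your values):

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"
| fillnull value="" a b c
| eval row=mvzip(mvzip(a, b, "|"), c, "|")
| stats list(row) as row
| mvexpand row
| eval a=mvindex(split(row, "|"), 0), b=mvindex(split(row, "|"), 1), c=mvindex(split(row, "|"), 2)

After the mvexpand, each result again carries the a, b, and c values that originally occurred together.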
This is the SPL magic I was missing. Now I can have a basic understanding of which indexes might be searched less frequently than others.
Already did - unhelpful support; they literally told me to try changing the wifi network. I haven't had this kind of support for years.
<condition match="$Services$ == &quot;vpc&quot;">
</condition>
<!--eval token="VPC_details">if(match($Services$, "vpc"), "true", "false")</eval>
<set token="VPC_details"></set>
<unset token="S3_details"></unset>
<unset token="EC2_details"></unset-->
<condition>
  <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
</condition>

I tried this, but it doesn't work.
Hi @poojak2579 , good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors
Hi @Vfsglobal , I'm not sure it's possible to install Splunk on an iPhone, but to install it on macOS you can follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/InstallonMacOS Ciao. Giuseppe
I am new here. I want to install and use Splunk on my iPhone and Mac. How do I install it, and where do I start?
I am facing the same issue.
Hello, I'm getting an "Action forbidden" error when going to "https://<hostname>/en-US/app/search/analytics_workspace" on Splunk Cloud. Please note: I am logged in with the sc_admin role. Thanks
@vnravikumar thanks for that, it helped a lot, but I have multiple fields. In that case, for static filters it's working, but coming to dynamic filters, it's reflecting the field name (rather than the value) in the filter of the linked dashboard. Thanks in advance.
One of the issues I recently dealt with was a delay in forwarding Security channel logs from Active Directory, which I finally resolved after a few days. Here are the steps I took to fix the problem (a monitoring sketch follows this list):

1. I investigated queue issues in the different pipelines. This search shows which queues are blocked and helps track down the source of the delay:
index=_internal host=* blocked=true
This way, you can check whether the issue is with the universal forwarder, the heavy forwarder, or a higher tier. I experienced this issue with both UF and HF.

2. I increased the queue size and added the following parameter along with the queue adjustment in /etc/system/local/server.conf:
parallelIngestionPipelines=2
https://conf.splunk.com/files/2019/slides/FN1570.pdf

3. To remove the throughput cap on the ingestion pipeline, I modified the following parameter in limits.conf:
[thruput]
maxKBps = 0
https://community.splunk.com/t5/Getting-Data-In/How-can-we-improve-universal-forwarder-performance/m-p/276195

4. The final and most effective step was changing the following parameter in the UF's inputs.conf:
use_old_eventlog_api=true
If you have added the parameter evt_resolve_ad_obj=true to translate SIDs/GUIDs and a domain controller cannot perform the translation, the task is passed to the next domain controller, and the forwarder waits for a response before proceeding, which can cause delays. To fix this, I added:
evt_dc_name=localhost

By implementing the above steps, logs were successfully received and indexed in real time. Thank you for taking the time to read this. I hope it helps you resolve similar issues.
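To quantify how full each queue is over time - rather than only catching queues that are already fully blocked - here is a minimal monitoring sketch against metrics.log (the group=queue fields current_size_kb, max_size_kb, and name are standard; the span is an arbitrary choice):

index=_internal source=*metrics.log group=queue
| eval pct_full=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=10m max(pct_full) by name

Queues that sit near 100% show the pipeline stage where the backlog starts; add host=<forwarder> to narrow it to a specific UF or HF.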
Hi @inventsekar,

1) As I recall, I only generated the lookup CSV file for testing in Tamil. An all-language lookup might be size-prohibitive. The best SPL-based workaround to count all graphemes seems to be an eval expression using the \X regular expression token to match Unicode sequences. The simplest expression was:
| eval count=len(replace(field, "\\X", "x"))

2) The external lookup allowed programmatic access to Python modules or any other library/program if called from an arbitrary script. The example returned a Unicode character category, but the subsequent counting solution wasn't comprehensive. In Bash, calculating the number of characters may be as simple as:
echo ${#field}
=> 58
but this suffers the same problem as our earlier efforts by not taking into account marks and code sequences used to generate graphemes. Is Perl better?
perl -CS -lnE 'say length' <<<${field}
=> 58
As before, the length is incorrect. I'm not a Perl expert, but see https://perldoc.perl.org/perlunicode: "The only time that Perl considers a sequence of individual code points as a single logical character is in the \X construct ...." That leads us to:
perl -CS -lnE 's/\X/x/g; say length' <<<${field}
=> 37
There may be a better native Perl, Python, etc. solution, but calling an external program is more expensive than the equivalent SPL.

3) If you only need to count graphemes, I would use the eval command. What other use cases did you have in mind?
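To illustrate the difference in pure SPL, a minimal sketch (urldecode is used here only to construct "e" followed by the combining acute accent U+0301 - a two-codepoint, single-grapheme string):

| makeresults
| eval s=urldecode("e%CC%81")
| eval codepoints=len(s), graphemes=len(replace(s, "\\X", "x"))

Assuming len() counts code points, as in the examples above, this yields codepoints=2 but graphemes=1.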
I don't quite follow your logic, but your solution will probably require mv eval functions and/or foreach. E.g., you can find the index of Category within your LookupCategory something like this:

| eval c=0
| foreach mode=multivalue LookupCategory
    [ eval mv_match=case(Category=<<ITEM>>, c, Category><<ITEM>>, -c, true(), mv_match), c=c+1 ]

I.e., a positive mv_match result means the MV index of an exact match (offset from 0). A negative mv_match result indicates the last LookupCategory value that Category was greater than, and an empty result means Category was never greater than any LookupCategory value. Then, with that knowledge, you can mvindex() the other MV values based on your needs, e.g. abs(mv_match).
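A minimal runnable sketch with hypothetical data (the field names LookupCategory, LookupValue, and Category and all sample values are invented for illustration; tonumber() is added because split() produces strings):

| makeresults
| eval LookupCategory=split("10,20,30,40", ","), LookupValue=split("low,med,high,max", ","), Category=25
| eval c=0
| foreach mode=multivalue LookupCategory
    [ eval mv_match=case(Category=tonumber(<<ITEM>>), c, Category>tonumber(<<ITEM>>), -c, true(), mv_match), c=c+1 ]
| eval matched=mvindex(LookupValue, abs(mv_match))

Here Category=25 last exceeded the value 20 at index 1, so mv_match=-1 and matched="med". One corner case: at index 0 an exact match (c=0) and a greater-than (-0) are indistinguishable, so handle index 0 separately if the sign matters to you.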