All Posts

Hi, I have registered for Splunk Cloud and clicked "Start free trial", but I still haven't received the email with the Splunk Cloud free trial account details, such as the credentials and the link.
Hi @Nraj87 , in my opinion, you could use the searches that you can find in [Settings > License > License Usage > Last 30 days > Split by sourcetype] rather than MLTK. Eventually you could train a model in MLTK starting from the previous search. But anyway, the most important activity is the analysis: starting from the above search you can analyze your data flow and identify the sources responsible for the data growth, so you can decide whether to enlarge the license or filter some events. Ciao. Giuseppe
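For reference, a sketch of the kind of search behind that panel (assuming the default license_usage.log in _internal, where b is the byte count and st is the sourcetype):

index=_internal source=*license_usage.log type=Usage
| bin _time span=1d
| stats sum(b) as bytes by _time, st
| eval GB = round(bytes/1024/1024/1024, 2)
| xyseries _time st GB

From there you can eyeball which sourcetype's daily volume jumped, and then drill into its individual sources.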
Hi @M2024X_Ray , good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
@mpc7zh  OpenSSL 1.0.2zj-fips 30 Jan 2024
Dear All, Some dynamic sources in my environment are ingesting more data into Splunk, and the license limit gets breached. Is there any way to detect such a source as an outlier through MLTK? I.e., the Cisco ASA sourcetype has multiple sources (firewalls) which ingest around 10 GB of data on a daily basis; suddenly one day, license usage reaches 20 GB. How can I identify which source sent more data into Splunk without creating a manual threshold or average of the data?
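For anyone hitting the same problem, a threshold-free sketch without MLTK (the sourcetype name cisco:asa below is an assumption; s is the source field in license_usage.log) is to compare each source's daily volume against its own average and standard deviation:

index=_internal source=*license_usage.log type=Usage st="cisco:asa"
| bin _time span=1d
| stats sum(b) as daily_bytes by _time, s
| eventstats avg(daily_bytes) as avg_bytes, stdev(daily_bytes) as stdev_bytes by s
| where daily_bytes > avg_bytes + 2 * stdev_bytes
| eval daily_GB = round(daily_bytes/1024/1024/1024, 2)

Any source that returns a row here sent noticeably more than its usual volume that day, without any manually maintained threshold.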
Thank you, case 3635939 has been created!
Adding to what @yeahnah and @isoutamo already said (yes, the zero-ingest license seems to be the way to go; otherwise you'll have a huge headache trying to make all your data available to a single Splunk instance, and to be honest, while it is not explicitly stated anywhere, I wouldn't be surprised if Splunk Free didn't support SmartStore, since after all it's a relatively enterprise-level functionality): be very cautious not to get your environment into a state in which you have an expired license, because then you must get an unlock license from the sales team. It's not enough to just upload a renewed license; once it's locked, you have to unlock it "manually".
OK. A few things here and there.

1. The format for the cert file (for inputs and generally for all splunkd-related activity except the web UI, which can be a bit confusing sometimes) is:

<subject cert (i.e. your forwarder or Splunk server)>
<private key>
<CA chain (if needed)>

(all of them PEM-encoded)

2. If you don't want to authenticate the forwarder with a cert, there's no point in generating one for it.

3. The "SSL23_GET_CLIENT_HELLO:unknown protocol" message is a fairly generic one. Check the indexer's logs for anything regarding connections from the forwarder's IP. This should tell you more.
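To make point 1 concrete, a sketch of the combined PEM file and the matching inputs.conf side for a splunktcp-ssl input (the path and port below are hypothetical):

-----BEGIN CERTIFICATE-----
(your server/forwarder certificate)
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
(the matching private key)
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
(intermediate/root CA certificates, if needed)
-----END CERTIFICATE-----

[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server_combined.pem
sslPassword = (the key password, if any)
requireClientCert = false

Note that requireClientCert = false matches point 2: the forwarder then doesn't need a certificate of its own.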
OK. Back up a little. Where does this data come from? You seem to have multiple multivalued fields. That might be a problem because with Splunk there is no implied relationship between those fields whatsoever, so values in one multivalued field do not have to be connected with values in another multivalued field. And their order doesn't need to match the order in another multivalued field. Take this for example:

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"

It will give you this:

a    b    c
a         c
     b    c
a    b

But if you try to "squeeze" it into multivalued fields by doing

| stats list(*) as *

You'll get a single row where each field now holds two values:

a    b    c
a    b    c
a    b    c

These don't match the "layout" of the input data for the stats command. So be extremely cautious when handling multivalued fields, because you might get completely different values from what you expect.
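If the rows in your source data do belong together, one common workaround (a sketch; the "|" delimiter is arbitrary) is to zip the fields into a single value per row before aggregating, so each original row survives as one multivalue entry:

| makeresults format=csv data="a,b,c
a,,c
,b,c
a,b"
| eval row = mvzip(mvzip(coalesce(a,""), coalesce(b,""), "|"), coalesce(c,""), "|")
| stats list(row) as row

You can later split a row back apart with split(row, "|"), keeping a, b and c aligned with each other.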
This is the SPL magic I was missing. Now I can get a basic understanding of which indexes might be searched less frequently than others.
Already did. Support was unhelpful; they literally told me to try changing the Wi-Fi network. I haven't had this kind of support in years.
<condition match="$Services$ == &quot;vpc&quot;">
</condition>
<!--eval token="VPC_details">if(match($Services$, "vpc"), "true", "false")</eval>
<set token="VPC_details"></set>
<unset token="S3_details"></unset>
<unset token="EC2_details"></unset-->
<condition>
  <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
</condition>

I tried this but it doesn't work.
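For comparison, a sketch of how this kind of conditional drilldown is usually structured in Simple XML (assuming Splunk 7.2+ eval-style condition matching; tokens and the target dashboard are taken from the snippet above): each <condition> has to contain the action it should perform, and a final <condition> without a match attribute acts as the fallback:

<drilldown>
  <condition match="match($row.Services$, &quot;vpc&quot;)">
    <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
  </condition>
  <condition>
    <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
  </condition>
</drilldown>

In the snippet above, the first <condition> is empty (so nothing happens when it matches) and the second has no match, so it fires for every click; that is one likely reason it "doesn't work".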
Hi @poojak2579 , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Vfsglobal , I'm not sure it's possible to install Splunk on an iPhone, but to install it on macOS you can follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/InstallonMacOS Ciao. Giuseppe
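As a quick sketch of the tarball route from that doc page (the filename below is a placeholder; use whatever version you download from splunk.com):

sudo tar xvzf splunk-<version>-darwin-64.tgz -C /Applications
/Applications/splunk/bin/splunk start --accept-license

After that, the web UI is at http://localhost:8000. There is no Splunk Enterprise build for iOS; on an iPhone you would only use the browser or the Splunk Mobile app to connect to an existing instance.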
I'm new here. I want to install and use Splunk on my iPhone and Mac. How do I install it, and where do I start?
I am facing the same issue.
Hello, I'm getting an "Action forbidden" error when going to "https://<hostname>/en-US/app/search/analytics_workspace" on Splunk Cloud. Please note: I have logged in with the sc_admin role. Thanks
@vnravikumar  thanks for that, it helped a lot, but I have multiple fields. In that case it works for STATIC filters, but for dynamic ones the filter of the linked dashboard shows the "Field for Value" name instead. Thanks in Advance.....
One of the issues I recently dealt with was the delay in sending Security channel logs from Active Directory, which I finally resolved after a few days. Here are the steps I took to fix the problem:

1. I investigated queue issues in the different pipelines. The following search helps identify blocked queues that cause delays:

index=_internal host=* blocked=true

This way, you can check whether the issue is with the universal forwarder, the heavy forwarder, or a higher tier. I experienced this issue with both the UF and the HF.

2. I increased the queue size and added the following parameter along with the queue adjustment in /etc/system/local/server.conf:

parallelIngestionPipelines=2

https://conf.splunk.com/files/2019/slides/FN1570.pdf

3. To adjust the max throughput rate in the ingestion pipeline, I modified the following parameter in limits.conf:

[thruput]
maxKBps = 0

https://community.splunk.com/t5/Getting-Data-In/How-can-we-improve-universal-forwarder-performance/m-p/276195

4. The final and most effective step was changing the following parameter in the UF's inputs.conf:

use_old_eventlog_api=true

If you have added the parameter evt_resolve_ad_obj=true to translate SIDs/GUIDs and it cannot perform the translation, it passes the task to the next domain controller and waits for a response before proceeding, which can cause delays. To fix this, I added:

evt_dc_name=localhost

By implementing the above steps, logs were successfully received and indexed in real time. Thank you for taking the time to read this. I hope it helps you resolve similar issues.
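For convenience, here is how the forwarder-side settings from the steps above could look combined into one stanza (a sketch; the stanza name assumes the standard Security event log input, and evt_resolve_ad_obj is only needed if you want SID/GUID translation at all):

[WinEventLog://Security]
disabled = 0
use_old_eventlog_api = true
evt_resolve_ad_obj = 1
evt_dc_name = localhost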