All Posts

Thank you for your reply. So, you are using [syslog] in outputs.conf on your indexers to send the data to QRadar? Is the other data you are sending to QRadar also being sent from the indexers, rather than from the source? If so, I guess this rules out a connectivity issue. Lastly, how have you configured the other data sources to send from the indexers to QRadar? Please share config examples of how you've achieved this so we can see if there is an issue here.
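For comparison, a minimal sketch of the kind of outputs.conf syslog stanza I'd expect on the indexers; the target group name, hostname, and port here are placeholder assumptions, not your actual values:

[syslog]
defaultGroup = qradar_syslog

[syslog:qradar_syslog]
# hypothetical QRadar collector address; UDP is the default syslog transport
server = qradar.example.com:514
type = udp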
1) Your architecture: UF --> IDX --> SH. Two sites with a CM/LM cluster; each site has 1 IDX, SH, CM, and LM. On the standby site, the CM/LM Splunk service is stopped.
2) Your configuration pertaining to data ingestion and data flow: We are using the indexer to send the data to the 3rd party. All the data is received at the remote end except the logs from the Splunk Windows components; we are also able to send the indexer server's own logs to the 3rd party.
As far as I remember (but I'm no Cloud expert, so you can double-check it), when subscribing to Splunk Cloud you have a choice between AWS and GCP hosting. And, to add to the confusion, if you don't want Splunk to manage the whole infrastructure for you (which has its pros and cons), you can also just deploy your own "on-premise" Splunk Enterprise environment on VM instances in your cloud of choice. But this has nothing to do with Splunk Cloud; it would still be Splunk Enterprise.
Not necessarily. You can use the output of a function operating on _raw as an argument to the lookup() function.
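For example, a minimal sketch in transforms.conf, reusing the test.csv lookup from the ingest-time example elsewhere in this thread; the replace() regex is an illustrative assumption about what the raw event looks like:

[lookup_from_raw]
# derive the lookup key directly from _raw instead of an already-indexed field
INGEST_EVAL = field3=json_extract(lookup("test.csv", json_object("field1", replace(_raw, "^.*?field1=(\S+).*$", "\1")), json_array("field3")), "field3")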
Ok. Wait. You're asking about something not working in a relatively unusual setup. So first, describe in detail: 1) Your architecture 2) Your configuration pertaining to data ingestion and data flow. Without that, we have no knowledge of your environment; we don't know what is working and what is not, or what you configured and where in an attempt to make it work, and everybody involved will only waste time ping-ponging questions trying to understand your issue.
Hi,
Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party? : Just to the 3rd party (QRadar).
You say you can see the data on your SH; when you search it, please check the splunk_server field in the interesting fields on the left. Is the server(s) listed there your indexers, or the SH? : Indexers.
How have you configured the connectivity to the 3rd party? : Yes, it's forwarding other syslogs successfully.
Hi @randoj! We just created a lookup definition manually in a local/transforms.conf, as you would with any other KV Store lookup. Additionally, we needed to do the same for the mc_incidents collection, as it is needed to correlate notable_ids and incident_ids, the latter of which are used in mc_notes. It is probably easier to access the collections using the Python SDK and scripts, but this solution worked for us and required less setup. Hope this helps!
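For anyone following along, a minimal sketch of what such a transforms.conf definition can look like; the stanza name and the field names are assumptions based on what we needed, not the collection's full schema:

[mc_incidents_lookup]
external_type = kvstore
collection = mc_incidents
fields_list = _key, incident_id, notable_id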
Can someone please guide me on this?
Hi @malisushil119

To ensure we can answer thoroughly, please could you confirm a few things:
- Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party?
- You say you can see the data on your SH; when you search it, please check the splunk_server field in the interesting fields on the left. Is the server(s) listed there your indexers, or the SH?
- How have you configured the connectivity to the 3rd party?
- Please could you check your _internal logs for any TcpOutputFd errors (assuming standard Splunk2Splunk forwarding)? A sample search is sketched below.
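Something along these lines should surface those forwarding errors; the log_level filter is optional:

index=_internal sourcetype=splunkd component=TcpOutputFd log_level=ERROR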
We have installed Splunk on Windows and we want to send Windows logs from the Search Head, LM, and CM to a 3rd party via an indexer. Somehow those logs can be seen in Search Head queries, but the indexer is not forwarding them to the 3rd party.
Hi @malisushil119, please don't attach a new post to another one, even if it's on the same topic; posting it separately means you'll receive a faster and probably better answer. Ciao. Giuseppe
To ensure Splunk fully reindexes a file whenever the datestamp changes, consider using initCrcLength and crcSalt in your inputs.conf. The default CHECK_METHOD = modtime may not detect content changes if the file is overwritten with similar data. Including a unique timestamp in the file or path can also help.
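A minimal sketch of such a monitor stanza; the path and values are assumptions to adapt:

[monitor:///var/log/app/daily_report.log]
# hash more of the file's head, so a changed datestamp near the top alters the CRC
initCrcLength = 1024
# mix the full source path into the CRC, so the same content at a new path is re-read
crcSalt = <SOURCE>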
Hi All, we have a requirement where a user needs to be emailed a dashboard periodically. The dashboard is made using Dashboard Studio, so the Export option is available. I configured the export option and sent a mail, but the PDF output shows no data on the individual panels; it only shows the panels still searching for results. The dashboard has a time picker in it, and no matter which value I set (last 4 hours to last 30 days), the result is the same. Has anybody faced a similar issue, and is there any workaround? Please help.
I am facing the same issue.
When we delete a row in a CSV lookup file, it gets deleted for that moment, but on saving, that row re-appears. It looks like a bug in the latest version, 4.0.5; it works perfectly fine in version 4.0.4. We are upgrading to 4.0.5 because of vulnerabilities in 4.0.4. Has anyone noticed this issue?
@tah7004  To use an ingest-time lookup, the field you want to apply it to must be specified as an indexed field. You can apply it successfully by configuring the configuration files as follows.

1. $SPLUNK_HOME/etc/apps/myapp/lookups/test.csv

field1,field2,field3
value1,value2,value3

2. $SPLUNK_HOME/etc/apps/myapp/local/props.conf

[test_ingest_lookup]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
TRANSFORMS-ingest_time_lookup = regex_extract_av_pairs, lookup_extract

3. $SPLUNK_HOME/etc/apps/myapp/local/transforms.conf

[regex_extract_av_pairs]
SOURCE_KEY = _raw
REGEX = \s([a-zA-Z][a-zA-Z0-9-]+)=([^\s"',]+)
REPEAT_MATCH = true
FORMAT = $1::"$2"
WRITE_META = true

[lookup_extract]
INGEST_EVAL = field3=json_extract(lookup("test.csv", json_object("field1", new_field, "field2", field2), json_array("field3")), "field3")

You can refer to another solution using INDEXED_EXTRACTIONS=json at the link below.
- How to filter Splunk data at ingest time (list match) [original in Japanese]
https://qiita.com/chobiyu/items/aec5ef3a75a8bab96546
Splunk, as software running on top of the OS, doesn't have any privilege to choose between swap and real memory; that is decided purely by the OS. There used to be many swap issues in Linux which could be better addressed or explained by the vendor's support. Frequent swap access can impact Splunk performance negatively, so you may want to control 'swappiness' with the help of an OS admin. https://www.techtarget.com/searchdatacenter/definition/Linux-swappiness  FYI.
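A minimal sketch of how that tuning usually looks; the file name and the value of 10 are assumptions to agree with your OS admin first:

# /etc/sysctl.d/99-swappiness.conf (hypothetical file; apply with: sysctl --system)
# lower values make the kernel less eager to swap application memory out (common default is 60)
vm.swappiness = 10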
Hi, I recently created a Dashboard Studio dashboard. While creating the dashboard, the dashboard title and widget titles are in one font format, but once I finished my dashboard and shared it publicly, the public URL I get shows a different font format when opened.

[Screenshot: normal dashboard font format, as seen while creating the dashboard]

Once I open the shared URL, the font looks as below. Please help on how to restore it to the original font.

[Screenshot: dashboard opened via shared URL]
Hi, after updating to version 8.x, do I need to create new indexes? Please advise. Is there any documentation for this? @inveinvestigation #index
@sreeranjan wrote:

We are currently working with the Splunk Enterprise product. The client has informed us that we will be transitioning to Splunk Cloud. From what I understand, Splunk Cloud refers to the Splunk Cloud Platform, where the entire infrastructure is hosted and managed by Splunk on AWS. Even though it runs on AWS, it's still referred to as Splunk Cloud, not AWS Cloud, since the architecture and services are maintained by Splunk. Is that correct?

It's exactly this way. Usually when we talk about Splunk Cloud, it means just the Splunk core platform in the cloud. That cloud can be in AWS, Azure, or GCP. Then there are the Classic and Victoria experiences on top of it. From the user's point of view, this determines which kinds of options you have, e.g. for deploying apps; you can see those in the Splunk Cloud description on docs.splunk.com. With SCP you could expand your environment with Edge Processor or Ingest Processor, which help with data ingestion configurations.