All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm planning to move the MC role to the new server; however, it needs to be configured as a search head to a multisite indexer cluster first. As far as I can gather from the documentation (https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/MultisiteCLI#Configure_the_search_heads), I just need to run the CLI command mentioned there on the new host, changing the master URI to my current master. However, after doing this and restarting Splunk on the new host, do I actually need to go into my old host and set the MC mode to standalone before restarting, or will that automatically be done when I continue setting up distributed mode on the new host? And finally, when do the clustered indexers get picked up as search peers in the MC, given that I don't add them manually?
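For reference, the CLI step from that docs page boils down to a single command; the sketch below uses placeholder values in angle brackets, and the flag names and `site0` choice should be verified against the 8.2.x documentation for your exact topology:

```shell
# Sketch only: configure the new host as a search head in the
# multisite indexer cluster (values in <> are placeholders).
# site0 makes the search head search across all sites.
/opt/splunk/bin/splunk edit cluster-config -mode searchhead \
    -master_uri https://<cluster-master-host>:8089 \
    -site site0 \
    -secret <your_cluster_secret>

# Restart so the change takes effect
/opt/splunk/bin/splunk restart
```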
Hello, I am wondering how other security service providers have handled this issue, or what best practice is.

To plan for least privilege, indexes would be separated out by group. We could store all data related to a group in a respective index: network traffic, network security, antivirus, Windows event data, etc., all in a single index for the group, and give that group permissions to the index.

An issue with this scenario is search performance. Searches may be performed on network traffic, or on host data, or on antivirus data, but Splunk will have to search the buckets containing all of the other unrelated data. If antivirus data is only producing 5GB a day while network traffic is producing 10TB a day, this will have a huge negative effect on searches for antivirus data. This will be compounded with SmartStore (S2), where IOPS will be used to write the buckets back to disk.

If least privilege weren't an issue, it would be optimal to create an index per data type: network traffic would have its own index, Windows hosts would have their own index. But the crux of architecting in this fashion is how to implement least privilege: one group must not be able to see the host data of another group. One idea to get around this is to limit search capability by host, but that would require much work from the Splunk team and is not guaranteed to hold if wildcards are used.

Another idea is to simply create a separate index for each data type for each group. My concern with this is scaling: if we have 10 groups that each require 10 indexes, that's 100 indexes. With 50 groups it's 500, and with 100 groups it's 1,000. This does not scale well.

Thank you in advance for your help.
I'm having the same problem: multiple VMs with Stream that have been working, but they all now fail with "unrecognized link layer for this device <eth1> 253". Does the current version no longer support virtualized link layers?
Sample data:

<?xml version="1.0" encoding="UTF-8" ?>
<Results xmlns:xsi="http://www.w3.org">
  <Result>
    <Code>OK</Code>
    <Details>LoadMessageOverviewData</Details>
    <Text>Successful</Text>
  </Result>
  <Data>
    <ColumnNames>
      <Column>Sender&#x20;Component</Column>
      <Column>Receiver&#x20;Component</Column>
      <Column>Interface</Column>
      <Column>System&#x20;Error</Column>
      <Column>Waiting</Column>
    </ColumnNames>
    <DataRows>
      <Row>
        <Entry>XYZ</Entry>
        <Entry>ABC</Entry>
        <Entry>Mobile</Entry>
        <Entry>-</Entry>
        <Entry>3</Entry>
      </Row>
    </DataRows>
  </Data>
</Results>

Hello, I need to extract fields from the above XML data. I have tried the props below, but the data is still not extracting properly.

props.conf:

CHARSET = UTF-8
BREAK_ONLY_BEFORE = <\/Row>
MUST_BREAK_AFTER = <Row>
SHOULD_LINEMERGE = true
KV_MODE = xml
pulldown_type = true
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true
TRUNCATE = 0
description = describing props config
disabled = false

How can I parse this data? Thanks in advance.
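One untested alternative worth trying for XML like this (the sourcetype name below is made up): break once per document instead of per `<Row>`, and let KV_MODE=xml handle field extraction at search time:

```ini
[xml_results:hypothetical]
# Break a new event wherever a fresh XML declaration starts
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<\?xml)
TRUNCATE = 0
# Search-time XML field extraction
KV_MODE = xml
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true
```

The idea is that each `<Results>...</Results>` document becomes a single event, so the xml extractor can see the whole structure rather than fragments produced by per-row line breaking.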
Hi everyone, for a project I need to deploy a test environment with Splunk and capture Stream logs in order to analyze them. For this project I have deployed Splunk Enterprise (9.1.2) on Ubuntu 20.04, and on another VM (also Ubuntu 20.04) I installed my UF (9.1.2). On the UF I added the Splunk Add-on for Stream Forwarders (8.1.1) to capture packets, and on my Splunk Enterprise instance the Splunk App for Stream (8.1.1).

I followed all the installation and configuration steps and debugged some issues, but I still have an error that I don't know how to fix. In the streamfwd.log file I see this:

2024-01-24 06:14:03 ERROR [140599052777408] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <ens33>: 253
2024-01-24 06:14:03 FATAL [140599052777408] (CaptureServer.cpp:2337) stream.CaptureServer - SnifferReactor was unable to start packet capture

ens33 is the right interface where I want to capture Stream packets, but I don't understand why it isn't recognized. If you have any ideas I will be very grateful.
Hi Delmah, You may find this reference helpful: https://docs.splunk.com/Documentation/AddOns/released/VMW/Installationoverview
Many thanks @ITWhisperer. In this SPL logic, how do we ignore the weekend data and bring only the last working day's count for yesterday? Is that possible?
Hi - and thanks a lot - I will try focusing on that part and investigate further
Hey everyone, I am in a situation where I have to provide a solution to a client of mine. Our application is deployed on their k8s and logs everything to stdout, where they pick it up and put it into a Splunk index; let's call the index "standardIndex". Due to a change in legislation, and a change in how they operate under it, we need to send specific logs, selected by message content (easiest for us..), to a special index we can call "specialIndex". I managed to rewrite the messages we log to satisfy their needs in that regard, but now I fail to route those to a separate index. The collectord annotations I put in our patch look like this, and they seem to work just fine:

spec:
  replicas: 1
  template:
    metadata:
      annotations:
        collectord.io/logs-replace.1-search: '"message":"(?P<message>Error while doing the special thing\.).*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.1-val: '${timestamp} message="${message}" applicationid=superImportant status=failed'
        collectord.io/logs-replace.2-search: '"message":"(?P<message>Starting to do the thing\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.2-val: '${timestamp} message="${message}" applicationid=superImportant status=pending'
        collectord.io/logs-replace.3-search: '"message":"(?P<message>Nothing to do but completed the run\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.3-val: '${timestamp} message="${message}" applicationid=superImportant status=successful'
        collectord.io/logs-replace.4-search: '("message":"(?P<message>Deleted \d+ of the thing [^\s]+ where type is [^\s]+ with id)[^"]*").*"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.4-val: '${timestamp} message="${message} <removed>" applicationid=superImportant status=successful'

My only remaining goal is to send these specific messages to a specific index, and this is where I can't follow the outcold documentation very well.
Actually, I am even doubting this is possible, but I don't understand it completely. Does anyone have a hint?
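One thing worth checking in the outcold documentation: collectord supports overriding the destination index through pod annotations. The annotation name below is an assumption patterned after the other `collectord.io/...` keys, so verify it against your collectord version before relying on this sketch:

```yaml
metadata:
  annotations:
    # Assumed annotation name -- verify in the collectord docs.
    # Routes this pod's logs to specialIndex instead of the default.
    collectord.io/index: 'specialIndex'
```

If only some messages (not the whole pod's output) must land in specialIndex, and your collectord version has no per-pattern index override, a Splunk-side props/transforms reroute keyed on the rewritten `applicationid=superImportant` marker is another option.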
The steps are the same, except for the Windows-specific parts.  For instance, there's no need to use LAUNCHSPLUNK=0.  Use splunk enable boot-start rather than the Service Control Panel. Submit feedback on the documentation to ask the Docs team to add Linux instructions.
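For the Linux side, a hedged sketch of the imaging sequence (paths and the package version are placeholders; check each command against the universal forwarder documentation for your version before baking it into an image):

```shell
# Sketch: prepare a universal forwarder inside a Linux golden image.
tar -xzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt

# First start: accept the license non-interactively
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt

# Linux equivalent of the Windows service setup step
/opt/splunkforwarder/bin/splunk enable boot-start

# Clear instance-specific state (server name, GUID) so each clone
# re-generates it on first boot
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk clone-prep-clear-config
# Capture the image with Packer after this point.
```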
This is a Splunk-supported add-on.  You should send this information to Splunk Support.  There is little we in the Community can do.
Assuming these are counts, you need to get values for Today and Yesterday into the same event in the pipeline. Try something like this:

basesearch earliest=@d latest=now
| append [ search earliest=-1d@d latest=-1d ]
| eval Consumer = case(match(File_Name,"^ABC"), "Down", match(File_Name,"^csd"), "UP", match(File_Name,"^CSD"), "UP", 1==1, "Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| stats count by Name Consumer Day
| eval {Day}=count
| fields - Day
| stats values(Today) as Today values(Yesterday) as Yesterday by Name Consumer
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Consumer Today Yesterday percentage_variance
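As a sanity check on the final eval, the percentage calculation is plain arithmetic; this small Python sketch (the function name is mine, not part of the SPL) mirrors abs(round(((Yesterday-Today)/Yesterday)*100,2)):

```python
def percentage_variance(today: float, yesterday: float) -> float:
    """Mirror of the SPL eval: abs(round(((Yesterday-Today)/Yesterday)*100, 2))."""
    return abs(round((yesterday - today) / yesterday * 100, 2))

print(percentage_variance(10, 10))  # 0.0 -- matches the expected-result row
print(percentage_variance(8, 10))   # 20.0
```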
We have encountered an issue with our IIS logs in an Azure storage account: our logging data becomes duplicated at the end of the hour, when the last-modified time of a closed log file is updated. This is caused by a known bug in the Azure extension that we are using and cannot update; however, it is the behaviour of the add-on that turns it into duplicated logs. An example error can be seen below:

2024-01-24 12:03:09,811 +0000 log_level=WARNING, pid=7648, tid=ThreadPoolExecutor-1093_9, file=mscs_storage_blob_data_collector.py, func_name=_get_append_blob, code_line_no=301 | [stanza_name="prd10915-iislogs" account_name="prd10915logs" container_name="iislogs" blob_name="WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log"] Invalid Range Error: Bytes stored in Checkpoint : 46738047 and Bytes stored in WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log : 46738047. Restarting the data collection for WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log

The error happens in the %SPLUNK_HOME%\etc\apps\Splunk_TA_microsoft-cloudservices\lib\mscs_storage_blob_data_collector.py file, around line 280. The blob stream downloader expects more bytes than the known checkpoint and raises an exception when the byte counts are the same. This exception is then handled by this piece of code:

blob_stream_downloader = blob_client.download_blob(
    snapshot=self._snapshot
)
blob_content = blob_stream_downloader.readall()
self._logger.warning(
    "Invalid Range Error: Bytes stored in Checkpoint : "
    + str(received_bytes)
    + " and Bytes stored in "
    + str(self._blob_name)
    + " : "
    + str(len(blob_content))
    + ". Restarting the data collection for "
    + str(self._blob_name)
)
first_process_blob = True
self._ckpt[mscs_consts.RECEIVED_BYTES] = 0
received_bytes = 0

Here the blob is marked as new and is fully re-downloaded and re-ingested, causing our data duplication.
We would like to request a change to the add-on that prevents this behaviour when the checkpoint byte count is equal to the log file's byte count. The add-on should not assume that a file has grown in size just because its last-modified timestamp changed.
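Not a patch, just an illustrative sketch of the guard this request describes; the names mirror the add-on snippet above, but the function itself is hypothetical:

```python
def should_restart_collection(checkpoint_bytes: int, blob_bytes: int) -> bool:
    """Only treat the blob as new if it actually shrank (truncated or replaced).

    Equal sizes with a bumped last-modified timestamp mean no new data was
    written, so the checkpoint should be left alone instead of reset to 0.
    """
    return blob_bytes < checkpoint_bytes

# The case from the log: 46738047 bytes in both checkpoint and blob
print(should_restart_collection(46738047, 46738047))  # False
```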
Hi Team, I need to install the Universal Forwarder on an on-premises Kubernetes system. Please point me to an installation guide for Kubernetes.
Hi, I have the SPL below and I am not able to get the expected results; please could you help? If I use stats count by, I don't get the expected result shown below.

SPL:

basesearch earliest=@d latest=now
| append [ search earliest=-1d@d latest=-1d ]
| eval Consumer = case(match(File_Name,"^ABC"), "Down", match(File_Name,"^csd"), "UP", match(File_Name,"^CSD"), "UP", 1==1, "Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Consumer Today Yesterday percentage_variance

Expected result:

Name  Consumer  Today  Yesterday  percentage_variance
TEN   UP        10     10         0.0%
We want to install Splunk in our golden image using Packer. This is for deploying servers from golden images in Azure, for RHEL 8 and Ubuntu 22. I found documentation for Windows ("Integrate a universal forwarder onto a system image" in the Splunk documentation), but not for RHEL/Ubuntu. Any help appreciated.
Hi everyone, ours is a small environment with 2 SHs and 3 indexers. Recently, after a resync on the SH cluster, I see the error below and the SH seems to be very slow. Is there a way to sort this out? This is the error/warning message I see in the MC, and below is the error while running ad-hoc searches:

"Gave up waiting for the captain to establish a common bundle version across all search peers; using most recent bundles on all peers instead"

@rbal_splunk, it looks like you have already answered this; can you please help here?
@PickleRick I'm using Splunk Enterprise. I wasn't sure of the best approach here; it sounds like I can use events. I'm not sure how to go about doing this, but I'll do more research.
Hello, for a dashboard, the user wants the canvas size to fit his screen every time he opens the dashboard. How can I define this?
It looks like your search may be wrong - please share the source of your dashboard in a code block.