All Posts

Again - you are very vague about your needs. Also, you might have chosen the solution badly - Splunk can do "realtime", but realtime searches have their limitations and are very resource intensive. To give you an analogy - it is as if you asked "What car should I buy that is most cost-effective? It must be red." We don't know what it is you need to do with that car, whether you need a sports car, a semi-truck or a bus; we don't know your reason for owning that car - but you want it to be cost-effective and painted red. Depending on context, it could be a Mazda MX-5, a city bus or a Caterpillar 797 in red paint.
Setting up the lookup the way you described and using makeresults to generate events (rather than an index search) works for me as expected. So, perhaps your real data or lookup is inconsistent with the description you gave, or you have found a bug. Which version of Splunk are you using?
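For reference, here is a minimal sketch of the kind of test described above - the lookup file name and field names are made up, since the actual lookup definition isn't shown in the thread:

| makeresults count=3
| streamstats count AS row
| eval user=case(row=1, "alice", row=2, "bob", row=3, "carol")
``` replace my_lookup.csv and the field names with your actual lookup and fields ```
| lookup my_lookup.csv user OUTPUT department

If a test of this shape returns the expected values but your real search does not, the difference is almost certainly in your real events or in the lookup contents.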
@isoutamo Hi, forget about Elastic and the separated index. I need to send raw logs via a forwarder to Splunk and create a dashboard that works with the metrics contained in the logs. What is the best-performing solution in Splunk that works with both real-time and historical data? The dashboard needs to load quickly and accurately, e.g. with a 1s span in a timechart. FYI: the data comes from several servers, with many log lines arriving every second.
Hi Ryan, Is it applicable for the SaaS Controller as well? Thanks, Sikha
We are due to go live on the following Monday and we want to erase all of our test Mission Control incidents so we have a clean slate. How is this possible?
"CEF:0|Bitdefender|GravityZone|6.35.1-1|35|Product Modules Status|5|BitdefenderGZModule=modules dvchost=xxx      BitdefenderGZComputerFQDN=xxxxx dvc=x.x.x.x deviceExternalId=xxxxx BitdefenderGZIsCont... See more...
"CEF:0|Bitdefender|GravityZone|6.35.1-1|35|Product Modules Status|5|BitdefenderGZModule=modules dvchost=xxx      BitdefenderGZComputerFQDN=xxxxx dvc=x.x.x.x deviceExternalId=xxxxx BitdefenderGZIsContainerHost=0 BitdefenderGZMalwareModuleStatus=enabled BitdefenderGZBehavioralScanAVCModuleStatus=enabled BitdefenderGZDataLossPreventionModuleStatus=disabled"   The logs are from Bitdefender and they show a time diff of +15 hrs. and there is no timestamp in logs no other source types from same HF show the behavior only bit-defender logs. All the help is appreciated to correct the time.
Use the script I have added below. Open PowerShell ISE or PowerShell as administrator.
-----------------------------
Create the path C:\splunk_install and put the MSI and the other files in it. Open PowerShell ISE as admin and then run the script. Make sure the MSI name in the script exactly matches the MSI you have, including the version. This has been tested and works fine for most people.
-------------------------------
$msiPath = "C:\splunk_install\splunkforwarder-9.1.2-b6b9c8185839-x64-release.msi"
$srcAppPath = "C:\splunk_install\Splunk_TA_windows"
$opappsrc = "C:\splunk_install\<your_other_app>"   # optional second add-on to deploy; set this or remove the second Copy-Item below
$appPath = "C:\Program Files\SplunkUniversalForwarder\etc\apps"

# Install the forwarder silently
Invoke-Command -ScriptBlock { & cmd /c msiexec /i "$msiPath" AGREETOLICENSE=Yes /quiet }
Write-Host "Installation complete on $env:COMPUTERNAME" -ForegroundColor Green
Write-Host "Validating install by checking if the service is running."
Get-Service -Name "SplunkForwarder" -ErrorAction SilentlyContinue
Write-Host "SplunkForwarder service is running on $env:COMPUTERNAME" -ForegroundColor Green

Write-Host "Copying necessary files for Splunk ..."
Write-Host "Stopping the Splunk service" -ForegroundColor Green
Stop-Service -Name SplunkForwarder

# Copy the add-ons into the forwarder's apps directory
Write-Host "Copying $srcAppPath"
Copy-Item -Recurse -Path $srcAppPath -Destination $appPath -Force
Write-Host "Copy of $srcAppPath complete"
Write-Host "Copying $opappsrc"
Copy-Item -Recurse -Path $opappsrc -Destination $appPath -Force
Write-Host "Copy of $opappsrc complete"

Start-Service -Name SplunkForwarder
Write-Host "Validating by checking if the service is running."
Get-Service -Name "SplunkForwarder" -ErrorAction SilentlyContinue
Write-Host "SplunkForwarder service is running on $env:COMPUTERNAME" -ForegroundColor Green
Write-Host "Complete"
The logic you could use would go something like this:

<your search>
| appendpipe
    [| where <conditions for events you want to output>
     | outputlookup <your csv>
     | where false() ``` This removes all the events so that they are not appended to your main event pipeline ```
    ]
| where <conditions for events you want to keep, i.e. not the events you wrote to the csv>
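As a concrete sketch of the same pattern - the index, sourcetype, field name and lookup file name below are invented for illustration, not taken from the thread - writing failed events to a lookup and then dropping them from the main results could look like this:

index=myindex sourcetype=mysourcetype
| appendpipe
    [| where status="failed"
     | outputlookup failed_events.csv
     | where false() ``` drop these rows so they are not appended back to the main pipeline ```
    ]
| where status!="failed"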
Hi, as others have already said, with Splunk there is no reason to split your index based on time, as Splunk always stores data as a time series and does this splitting by time automatically when it stores events into buckets. Also, as said, a summary index means that you want to do some calculations (statistical summaries/functions) on your data. If you can open up your use case, we could probably propose some best practices for it with Splunk. When you refer to Elasticsearch or an RDBMS vs. Splunk, it usually makes no sense to compare them directly, as Splunk works in a totally different way. Quite often those best practices (and what you must do with them) are almost worst-case solutions in Splunk; usually there is a much better way to handle them with Splunk. r. Ismo
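Just to illustrate what feeding a summary index can look like - the index, field and summary index names below are placeholders, not from this thread - a summarizing search typically aggregates the raw events and writes the result with collect:

index=myindex sourcetype=mysourcetype earliest=-5m@m latest=@m
| stats count AS event_count avg(response_time) AS avg_response_time BY host
| collect index=my_summary

Dashboards can then read from my_summary for fast historical panels, while the raw index stays available for detailed drill-down.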
I agree with @PickleRick: don't use _ as a prefix for your own fields. I'm not sure whether it even works. Also, it's usually better to do that at search time, not at ingest time. If you really need it, then your solution should work as you show it. One thing to remember is that you must put those props & transforms on the first full Splunk instance (HF or indexer) on the path from the source to the indexers to get it working.
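For comparison, here is a sketch of the search-time alternative - the index name is a placeholder and the field is named run_id purely for illustration (without the leading underscore) - deriving the same value with an eval on the source field:

index=yourindex source="/opt/airflow/logs/*/*/*/*.log"
| eval run_id=mvindex(split(source,"/"),5)

Because source is already an indexed field, this costs nothing at ingest time and can be adjusted later without reindexing.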
Hello Splunkers, I'm trying to send traces from an existing website built on top of Python (3.9.7), Django (4.1.3) and MySQL (8.0.32), hosted on Linux, to APM Observability. I'm having problems configuring it via Python instrumentation. Here are the steps I did, using a virtual environment, based on the Splunk docs:
1. Installed the OpenTelemetry Collector via the curl script
2. Installed the instrumentation packages for the Python environment
3. Ran splunk-py-trace-bootstrap
4. Set environment variables (OTEL_SERVICE_NAME, OTEL_RESOURCE_ATTRIBUTES, OTEL_EXPORTER_OTLP_ENDPOINT, DJANGO_SETTINGS_MODULE)
When I enable the Splunk OTel Python agent it gives me the error below:
Instrumenting of sqlite3 failed ModuleNotFoundError: No module named '_sqlite3'
Failed to auto initialize opentelemetry ModuleNotFoundError: No module named '_sqlite3'
Performing system checks...
I've already tried reinstalling sqlite3, and even downloaded the sqlite3 contents from the Python repository and manually replaced the sqlite3 files, but I still cannot proceed. Any help or direction would be very much appreciated. Thanks!
Hi Ryan, Please find the link: https://docs.appdynamics.com/appd/onprem/latest/en/events-service-deployment/install-the-events-service-on-linux Please tell me where this helpful information can be found there. After 10 hours of troubleshooting we found that we need to set the following variables:
export JAVA_HOME={Installation dir}/product/events-service/processor/jre/bin
export INSTALL_BOOTSTRAP_MASTER_ES8=true
Thanks for your consideration. Regards
As I wrote before - there is no such thing as "summary realtime". Depending on your particular use case, you might create a fairly frequent (every 5 minutes? maybe even every minute if you have enough resources, but then you might run into problems with event lag) scheduled search summarizing your data. But there is no general solution here - it will depend on the particular requirements. Maybe a summary isn't even needed at all; maybe it's just a matter of properly searching the data you have. I don't know - you're very vague in describing your problem.
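For illustration only - the stanza name, index, field and summary index names below are invented, not from the thread - a frequently scheduled summarizing search can be defined in savedsearches.conf roughly like this:

[summarize_my_metrics]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = @m
action.summary_index = 1
action.summary_index._name = my_summary
search = index=myindex sourcetype=mysourcetype | sistats count avg(response_time) BY host

The si- commands (sistats, sitimechart, ...) store partial results that the matching stats/timechart command can finish off when you later search the summary index.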
I'm able to get that result, but I'm not getting the logic to write the filtered data into a lookup and later drop the events which were written to the lookup.
@PickleRick thanks, how about this part: 1. Run the Splunk forwarder on the client and send the logs to the Splunk server; each line contains a lot of data, so I need to populate the summary index as soon as a log is received and store a summary of each line in that summary index continuously, in real time.
Hi Guys, We are collecting the Kubernetes logs using HEC on our Splunk Cloud. Whenever there is an ERROR entry in the logs, it has a timestamp in the first line, and the later lines, logged one after another, contain information about that error. But when we view them in the Splunk console these lines are split into multiple events, which leads to confusion. Is there any way we can merge these particular lines into single events, so that all lines related to an error are visible as one event? Please help with this.
1. I'm not sure if you can easily create fields with names beginning with an underscore. I'm not saying you definitely can't, but by convention they are internal Splunk fields, so I wouldn't be surprised if you couldn't (or had problems accessing them later).
2. If you already have that info in the source field, there is not much point in creating an additional indexed field duplicating the value. (I could agree that in some very rare cases there could be a use for such an indexed field if that info were stored only in the raw event itself, but since it's contained in source, which is itself an indexed field, there is not much point in just rewriting it elsewhere.)
If you dig through the limits.conf file spec - https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf - you'll see there are several separate limits. Some aspects of subsearching can hit a 10k-results limit, others have a default limit of 50k. If I remember correctly, the join command has a limit of 50k, but a "direct subsearch" can only return 10k results.
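As far as I recall, the relevant defaults live in these limits.conf stanzas - treat the values below as the usual defaults and verify them against the spec linked above for your version:

[subsearch]
# maximum number of results a plain subsearch returns by default
maxout = 10000

[join]
# maximum number of results the join command's subsearch returns by default
subsearch_maxout = 50000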
Unless you have a very strange use case, there is hardly ever a need to split your data (of the same kind) into separate indexes only to make searches faster. There are "two and a half" (two basic and one more advanced) cases where you need to split your data among indexes:
1. Since you grant access to roles on a per-index basis, you need to split data into indexes if you want to differentiate access to parts of that data.
2. Retention policies (time to frozen, max index size and so on) are specified on a per-index basis, so if you need to store some data longer than the rest, you send it to another index (a small indexes.conf sketch follows below).
3. You might want to split your data into separate indexes if you have separate "kinds" of data (different sourcetypes or data from different sources) with greatly differing volume characteristics (for example, you're getting a huge load of flow data from your network devices and just a few alerts daily from your DAM solution - in such a case you'd want to separate the DAM events from the network events so that you can search the DAM events quickly, without having to shovel through buckets of network data).
But apart from that, Splunk can handle huge loads of data pretty efficiently. And since Splunk stores data in buckets and searches only the buckets relevant to your search time range, there is no need to split your data further into indexes based on time - it would only make managing your data harder, because you'd have to keep track of where your data is, which indexes you have to search, and so on.
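To illustrate point 2 - the index name and values below are just examples - retention is configured per index in indexes.conf, e.g.:

[my_longterm_index]
homePath   = $SPLUNK_DB/my_longterm_index/db
coldPath   = $SPLUNK_DB/my_longterm_index/colddb
thawedPath = $SPLUNK_DB/my_longterm_index/thaweddb
# keep events roughly 2 years before they are frozen (value is in seconds)
frozenTimePeriodInSecs = 63072000
# cap the total index size at about 500 GB
maxTotalDataSizeMB = 512000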
@isoutamo If I choose to extract these fields from the file path and append them at the ingest phase, will the approach below work?

props.conf
[source::/opt/airflow/logs/*/*/*/*.log]
TRANSFORMS-set_run_id = extract_run_id

transforms.conf
[extract_run_id]
INGEST_EVAL = _runid = mvindex(split(source,"/"),5)