All Posts

I think your understanding of your current scenario is correct. It's possible that Azure Monitor has a way to create this new dimension on its side, so that when you export the metric through an integration like the one used by Splunk Observability Cloud, the custom DBNAME dimension is already there. That's just an idea: I don't know whether Azure Monitor has a feature to do this, but it seems plausible.

Another possibility would be to collect this metric with an OpenTelemetry Collector instead of using the Azure Cloud integration. There is a new OTel receiver being developed called azuremonitor: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/azuremonitorreceiver

If you can collect this metric with OTel, you can extract the short name of the database and add it to your metric using an OTel attributes processor: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/attributes-processor.html
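To make the second option concrete, here is a minimal Collector config sketch. It assumes the azuremonitor receiver exposes the full database resource path in an attribute; the source attribute name, regex, and credential fields below are illustrative assumptions, not verified against the receiver's actual output:

```yaml
receivers:
  azuremonitor:
    subscription_id: "<your-subscription-id>"
    tenant_id: "<tenant>"
    client_id: "<client>"
    client_secret: "<secret>"

processors:
  # Extract the short DB name into a new DBNAME attribute via a named
  # capture group; "azure.resource.name" is an assumed attribute name.
  attributes/dbname:
    actions:
      - key: azure.resource.name
        pattern: /(?P<DBNAME>[^/]+)$
        action: extract

exporters:
  signalfx:
    access_token: "<token>"
    realm: "<realm>"

service:
  pipelines:
    metrics:
      receivers: [azuremonitor]
      processors: [attributes/dbname]
      exporters: [signalfx]
```

The extract action creates one new attribute per named capture group, so DBNAME would appear as a dimension on the exported metric.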
Hi All, I have a challenge to which, after much consideration, I have made a decision that also has some consequences.

I'm a (Splunk) consultant for a company with hundreds of customers around the world, whom I finally convinced to get a dedicated logging and monitoring system. Long story short, after a longer PoC, Splunk was chosen.

Now to the challenge. These customers pretty much all use more or less the same software platform, created by the company I work for, which produces both Events and Metrics (which is why your app is in the picture).

To limit the massive amount of app management, along with GDPR concerns and so on, each customer gets ONE index defined as default, but each has 4 indexes: a set of summary indexes and a set of ordinary indexes, with 1 event and 1 metrics index in each set.

When installing the UF for each customer, each gets one default (event) index set in inputs.conf. This way all Events end up in the right index, but Metrics do not. All indexes follow a strict naming convention in which <customer id>_e_<some more> indicates 'Events' and, vice versa, _m_ indicates their Metrics index.

So far so good!

Using the great app 'Multi-Metric Perfmon' and defining the index on the UF (a very unwanted solution), data goes straight through the HF to the IDX server as expected. But that solution would demand administration of individual apps per customer, which is a NO-GO.

Now, this raises the challenge, and I basically don't understand why it becomes a challenge at all. To circumvent the issue of managing hundreds of apps, I control everything by sourcetype, and let the HF switch the index between Event/Metrics depending on the incoming sourcetype.
So basically I use props.conf to catch any sourcetype with 'metrics' in its name, and then use a transforms.conf REGEX to change the index name from the default '<bla bla>_e_<bla>' to '<bla bla>_m_<bla>'. This works perfectly, except that I get the following error message in Splunk and NO data in the index, unlike when the index is set directly (via the 'Multi-Metric Perfmon' app's inputs.conf):

The metric event is not properly structured, source=LogicalDisk, sourcetype=Perfmon, host=w_00001_test_bjd_0001, index=c_00001_no_emea_m_pub. Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

I'm far from a Splunk Metrics expert; though I've worked intensively with Splunk for 10 years, metrics just never came my way until now.

So I don't know what happens between the UF and the IDX, except that, as said, if I define the index on the UF in the app's inputs.conf, everything works fine. Whereas if I don't define an index in the app's inputs, it goes to the default index, which is an Event index, which is why I let the HF change the index name to its corresponding _m_ metrics index.

Using Splunk's _internal metrics I can see the data being transferred to the indexer with the correct index name, but there it stops, and I get the message above.

Can you explain this behaviour? And moreover, how do I fix it? What is happening on the HF that I don't see, since the data is now rejected even though it is pointed at the correct index?

Your input and/or help would be most appreciated.
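For reference, the sourcetype-driven index rewrite described above is usually implemented with a props/transforms pairing like the following sketch (the stanza names and sourcetype pattern are illustrative, not taken from the poster's actual config):

```
# props.conf on the HF - match any sourcetype containing "metrics"
[(?::){0}*metrics*]
TRANSFORMS-route_to_metrics_index = switch_e_to_m

# transforms.conf - rewrite the default "_e_" index to its "_m_" twin
[switch_e_to_m]
SOURCE_KEY = _MetaData:Index
REGEX = ^(.+)_e_(.+)$
FORMAT = $1_m_$2
DEST_KEY = _MetaData:Index
```

One thing worth checking under this setup: a heavy forwarder fully parses ("cooks") the data, so if the metric events arrive already structured from the UF, re-parsing them on the HF may interact badly with the metric schema. That is a hypothesis to verify, not a confirmed diagnosis.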
Hello @kiran_panchavat, thanks! I've been reviewing that post carefully, but I couldn't find a solution. Apparently they're talking about a custom script. I'll keep you posted if anyone ever finds a solution to this case.
I have some HTML/CSS like below that sets the width of some single value panels. In v8.3.1 this worked fine, but in v9.4.0 it does not work and the panels are sized evenly across the row, i.e. two single value panels each get 50%. I have tried using the Developer Tools in Chrome, but none of the elements I try have any effect.

#panel1 { width: 20% !important }
#panel2 { width: 20% !important }

Any thoughts?
If setting 200 parallel pipelines helped, that means the existing pipeline thread(s) on the IF (depending on how many pipelines) maxed out. Check with:

index=_internal source=*metrics.log host=<IF> ratio thread=fwddatareceiverthread* | timechart span=30s max(ratio) by thread

If all values are > 0.95, you need to add more pipelines.
You may want to try setting, in $SPLUNK_HOME/etc/splunk-launch.conf on the IF:

SPLUNK_LISTEN_BACKLOG = 512
Hello, syslog is being sent to a UF and then to the Indexers; there is no HF to do parsing. Is what I am trying to accomplish possible using search-time field extractions?
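Generally yes: search-time extractions need only props.conf on the search head (or a TA deployed there), so the UF-to-indexer path can stay untouched. A minimal sketch, with a hypothetical sourcetype name and regex:

```
# props.conf on the search head (sourcetype and regex are illustrative)
[my:syslog]
EXTRACT-src_ip = src=(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})
```

The named capture group becomes the field name (here src_ip) at search time, with no reindexing required.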
@gcusello may I know how you derived the calculation of the number of indexers for more than 1 TB of volume?
I believe there's an issue with the website itself, as I'm also not able to access it. Give it some time and try again later.
Contact Splunk Education (education_amer@splunk.com) for assistance.
I have an error doing my course in Splunk:   This application domain (https://education.splunk.com) is not authorized to use the provided PDF Embed API Client ID.
Adding to the valid points already raised by @gcusello: "changehost" is a name that could easily repeat in other apps, so I'd check with btool whether something overwrites it by any chance:

splunk btool transforms list changehost --debug

That's one thing. Another: I'm never sure when you need to use WRITE_META and when you don't, so to be on the safe side I just use it on all index-time extractions.
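As a rule of thumb: WRITE_META is needed when a transform creates a new indexed field via FORMAT alone, while a transform that targets a specific metadata key through DEST_KEY does not use it. A sketch of both cases (the second stanza name and regexes are illustrative, not from the thread's actual config):

```
# transforms.conf
# Rewriting an existing metadata key: DEST_KEY, no WRITE_META needed
[changehost]
REGEX = hostname=(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

# Creating a custom indexed field: WRITE_META = true is required
[extract_appid]
REGEX = appid=(\S+)
FORMAT = app_id::$1
WRITE_META = true
```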
Hi @Cheng2Ready

If you have a look in $SPLUNK_HOME/etc/system/default/savedsearches.conf, you can see some of the default values for the items you're referring to, for example:

action.email = 0
action.populate_lookup = 0
action.rss = 0
action.script = 0

This ultimately means these aren't configured; if one of them were configured for a specific report/search/alert, its value would be updated to 1. Not all variables are alike: developers who create and share their own alert actions might use different default values (e.g. blank instead of 0).

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
Hi @Cheng2Ready,

you have two ways:

- insert all the dates to exclude in the lookup; in this case you can use the above search;
- insert in the lookup only the holidays and run something like this:

your_search
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT ( [ | inputlookup holidays.csv | fields date ] OR [ | inputlookup holidays.csv | eval date=strftime(strptime(date,"%Y-%m-%d")+86400,"%Y-%m-%d") | fields date ] )
| ...

Obviously the lookup must contain a column called "date" and the format of the values must be "yyyy-mm-dd".

Ciao.
Giuseppe
Hi @boknows,

it's correct to put the configurations in the local folder of your TA.

What's the flow of your data, and where do you receive it? This seems to be data received via syslog, which is usually received on a Heavy Forwarder. Could you describe the flow of your data through the Splunk machines? In other words, I suppose there's a syslog receiver: is it a Universal Forwarder or a Heavy Forwarder (a full Splunk instance)? If it is a UF, is there some other Splunk machine between it and the Indexers? If yes, is it a UF or an HF?

In short: if you're sure there isn't any HF, put the add-on on the Indexers; otherwise, put it on the first HF.

Ciao.
Giuseppe
@kiran_panchavat thank you! I followed the format of your search query and now I can see the data. Really appreciate your response and the education.
@swlf

HEC receives events via HTTP requests that may include a HEC token, channel identifier header, metadata, or event data formatted as raw text or JSON: https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/FormateventsforHTTPEventCollector

The raw JSON is still stored in the _raw field. Try searching the index your HEC token writes to; once you run the query, you can also change your view from "List" to "Raw".
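For example, a minimal search sketch (the index and sourcetype are placeholders for whatever your HEC token is configured with):

```
index=<your_hec_index> sourcetype=<your_sourcetype>
| table _time _raw
```

The _raw column shows the event payload exactly as HEC stored it.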
@shashank9  Thanks for the update. It’s great that one of your Splunk receivers is now getting the logs as expected. Since the other receiver still isn’t showing data, I’d recommend a quick review of its configuration to see if there’s a missing or misconfigured detail. If the steps were helpful and you resolve the issue, feel free to accept the solution. Thanks again for your update!
I think there is an indexing delay in Splunk. My first index now shows the number of bytes indexed, but I still don't know where to find the raw data. I've been navigating to the HEC page and clicking on the host, which shows all the logs but not the raw data.
Hi @kiran_panchavat, actually I accidentally terminated my EC2 instances in AWS and had to relaunch them and reinstall Splunk from scratch on all of them. Once I set them up and configured event routing from my Heavy Forwarder to different Splunk receivers, I could see that a specific group of logs/events is sent to one of my Splunk receivers, which is expected. I still could not see the data in my other Splunk receiver, but I guess I just need to double-check my configuration, since it is working fine with one of the servers. Also, thank you for your time in guiding me through those steps to troubleshoot the issue.