All Posts


We are planning to integrate our SAP BTP Fiori app with Cisco AppDynamics and need some guidance. Could you please provide us with information on the following:
- The initial setup required in AppDynamics for SAP Fiori apps.
- Any specific agents or SDKs we should use for monitoring Fiori apps.
- How to set up custom metrics and configure alerts for Fiori apps.
- Tips on troubleshooting common issues during integration.
We appreciate any documentation, resources, or advice you can share to help us ensure a smooth integration.
@bpenny did you ever figure this out? I'm running into the exact same issue. I think the problem is that we're referencing a JSON path. If I move the timestamp to a top-level JSON field in the event, it picks up the timestamp just fine.
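In case it helps anyone else hitting this: one workaround for nested JSON timestamps is to anchor TIME_PREFIX on the key name in props.conf so the timestamp processor skips ahead to it wherever it sits in the event. A minimal sketch — the sourcetype name and time format here are assumptions, adjust both to your data:

[my_json_sourcetype]
# Hypothetical example: jump to the "timestamp" key, even if it is nested
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40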
Hi, following the suggestion, I tried it out and got the expected behavior: only "admin" role users get the options to "Open in Search"/"Export" etc. Amend the SPL search according to what you want to achieve:

<!-- Run a search to get the role value for the current user -->
<search>
  <query>| rest /services/authentication/current-context
| search username!=splunk-system-user
| fields roles</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <done>
    <eval token="search_visible">if($result.roles$=="admin","true","false")</eval>
  </done>
</search>

<!-- Selectively disable only the export function -->
<!-- admin role can export, other roles can't, etc. -->
<option name="link.exportResults.visible">$search_visible$</option>
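One caveat worth noting (my own assumption, not part of the solution above): if a user holds several roles, $result.roles$ can contain more than just "admin", so the strict equality test may fail. A more forgiving variant uses match() instead:

<done>
  <!-- Assumption: roles may come back as a combined/multivalue string, so test for "admin" as a substring -->
  <eval token="search_visible">if(match($result.roles$,"admin"),"true","false")</eval>
</done>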
Ah, so in the future timestamp issues will have to be resolved by restarting the instance. Thank you @gcusello!
Hi @MVK1,
you can create your lookup using the Splunk Lookup Editor App (https://splunkbase.splunk.com/app/1724).
Then you have to create your lookup definition [Settings > Lookups > Lookup Definitions > Create New Definition]; in this step, pay attention to the other properties, e.g. if you don't want the lookup to be case sensitive.
Then you can either populate the lookup manually using the Lookup Editor, or schedule a search that extracts the FailureMsgs and stores them in the lookup using the outputlookup command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Outputlookup).
Just one question: your lookup should contain product and Feature, but I don't see this information in the sample you shared — where would it come from?
Ciao.
Giuseppe
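For illustration, a minimal sketch of such a scheduled search — the index, sourcetype, and lookup file name here are placeholders, substitute your own:

index=your_index sourcetype=your_sourcetype
| stats count BY FailureMsg
| outputlookup failure_msgs.csv

Scheduled to run e.g. daily, this keeps the lookup refreshed without any manual editing.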
Hi @abi2023,
as @marnall said, you can create different apps and deploy them to the UFs using different serverclasses.
About data masking: to my knowledge, UFs take part only in the input phase, while the other phases (merging and parsing) happen on the first full Splunk instance the data passes through — in other words, on the Indexers or (when present) on the first Heavy Forwarder, but not on the UFs.
If your concern is that the data is sent in cleartext, you can encrypt it between the UFs and the Indexers (or HFs), and then mask it on those systems.
Ciao.
Giuseppe
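As a sketch of what that masking could look like on the Indexer/HF side, here is a minimal SEDCMD example in props.conf — the sourcetype name and the SSN-style pattern are assumptions, substitute your own:

[your_sourcetype]
# Hypothetical example: replace anything that looks like an SSN before it is indexed
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g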
Hi @satyaallaparthi,
good for you — see you next time!
Let me know if I can help you more, or, please, accept one answer for the other people in the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
Hi @dongwonn,
not all configurations are reloaded with /debug/refresh. For this reason it's always better to restart Splunk.
Ciao.
Giuseppe
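For reference, a sketch of the two options (paths assume a default Linux install under /opt/splunk):

# Reload what can be reloaded without a restart, via the web endpoint:
#   http://<splunk-host>:8000/en-US/debug/refresh
# Full restart, for settings that /debug/refresh does not cover
# (such as the time-parsing props.conf changes discussed in this thread):
/opt/splunk/bin/splunk restart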
Hi @selvam_sekar,
to identify the common clientids between the three sourcetypes, you could run something like this:

index=a sourcetype IN ("Cos","Ma","Ph")
| stats count dc(sourcetype) AS sourcetype_count BY clientid
| where sourcetype_count=3
| fields - sourcetype_count

Ciao.
Giuseppe
I don't know why, but after applying the settings and restarting, the year value was set normally.

[host::x.x.x.21]
TIME_PREFIX = ....
TIME_FORMAT = ....

So far I have reloaded settings with /debug/refresh, but this time I tried reloading them by restarting Splunk. Since our environment runs on just one server and is difficult to restart, is it possible that there are cases where new settings are not reloaded without a restart?
If you do not have a 'group' value per event, then you can simply create one by putting | streamstats count as group before the first stats mentioned by @yuanliu.
Thank you @bowesmana. I'm also considering implementing the select value in JavaScript. Appreciate your time.
Hello Everyone,
I have Splunk Enterprise installed on a CentOS 7 Linux OS. I have added CSV data and I wish to build a dashboard; however, when attempting to add a background image I am getting a 503 Service Unavailable error.

When running the splunk start command, this is what I get:

Splunk> Finding your faults, just like mom.
Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history keepereu main summary
	Done
	Checking filesystem compatibility... Done
	Checking conf files for problems... Done
	Checking default conf files for edits...
	Validating installed files against hashes from '/opt/splunk/splunk-9.2.0.1-d8ae995bf219-linux-2.6-x86_64-manifest'
	All installed files intact.
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done
[  OK  ]

Waiting for web server at http://127.0.0.1:8000 to be available.....................

When running the status command I get:

[root@localhost splunk]# /opt/splunk/bin/splunk status
splunkd is running (PID: 2614).
splunk helpers are running (PIDs: 2640 3079 3140 10111 10113 10144 10155 10172 10692 10808 11367 21735 24183).
[root@localhost splunk]#

What logs should I inspect to better understand why this is happening?
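Not an official answer, but a likely starting point: the splunkd and web/appserver logs (paths assume a default install under /opt/splunk):

# A 503 from Splunk Web usually leaves a trace in one of these:
tail -f /opt/splunk/var/log/splunk/splunkd.log
tail -f /opt/splunk/var/log/splunk/web_service.log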
This seems to be a bug but for me it started working on its own after like 12 hrs.
Can you please try appending the below?

| makemv delim="," allowempty=t CmdArgAV

Please accept the solution and hit Karma, if this helps!
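A quick way to see what that does, using a hypothetical sample value for CmdArgAV:

| makeresults
| eval CmdArgAV="arg1,arg2,,arg3"
| makemv delim="," allowempty=t CmdArgAV

allowempty=t keeps the empty value between arg2 and arg3 as its own entry in the resulting multivalue field.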
As you have already experienced, Splunk strongly disfavors join. This is only natural, as most NoSQL systems do the same.

So, you explained how many events these sources can give and how many different client IDs there are. What you forgot to tell us is what you mean by "to get the count/volume based on the client id". If you only want to count events from each sourcetype by clientid, all you need to do is

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph)
``` you can also use index=a sourcetype IN (Cos, Ma, Ph) ```
| stats count by clientid, sourcetype

(which is a copy of your SPL snippet, but with a pipe (|) added in front of stats to make the syntax correct.) There is no join. The above will not time out even with millions of events.

In other words, what does "compare" mean in "to compare the same in another application", and what does the word mean in "to compare whether these client ids are present in another application"? If you want to know which and how many sourcetypes (apps) each clientid appears in, all you need is to add the following:

| stats sum(count) as total values(sourcetype) as apps dc(sourcetype) as app_count by clientid

Put together:

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph)
``` you can also use index=a sourcetype IN (Cos, Ma, Ph) ```
| stats count by clientid, sourcetype
| stats sum(count) as total values(sourcetype) as apps dc(sourcetype) as app_count by clientid

Still no join. Where do you get a join to time out?
I cannot get a sense of this question.

What is the data table at the beginning of this post supposed to be?

Right before inputlookup, you have a stats command that reduces the data fields to downtime and data.componentId. I assume that everything above inputlookup is working as expected. If this is the case, please just post sample/mock values of downtime and data.componentId and ignore anything about app and input selection. (See below.)

What fields (columns) are in this all_env_component.csv file? And how is this file useful to what you want in the end?

What exactly is it that you want in the end? By this, I mean: what do the "4 multi select boxes" have to do with this question? Your search does not use a single token, which means that none of these selections should have any effect on the results.

In short, you need to
- post the data input — sample/mock downtime and data.componentId pairs are enough;
- explain what is in that lookup file, and provide some sample/mock values;
- explain what you are trying to do after that inputlookup, illustrate what your expected results look like from the sample/mock input values, and describe the logic between the input and desired results.

These are the basics of an answerable question in a forum about data analytics.
@meetmshah thanks, 'max_match=0' helped. But the command keywords are separated by newlines ('Enter'). Are there any options to keep all words on one line?
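Not sure if this is what you're after, but a common trick is to collapse the multivalue result back into a single space-delimited string with mvjoin (CmdArgAV is assumed to be the extracted field here — substitute your own):

| eval CmdArgAV=mvjoin(CmdArgAV, " ")

mvjoin turns the multivalue field into one string, so all the extracted keywords render on a single line.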
Hi, I have a requirement as below; please could you review and suggest?

I need to pick up all client ids from an application log called "Cos" (index=a sourcetype=Cos), where the distinct client ids number around 6 million. I want to compare whether these client ids are present in another application log called "Ma" (index=a sourcetype=Ma), and I also want to compare the same in another application called "Ph" (index=a sourcetype=Ph).

Basically I am trying to get the count/volume based on the client id that is common among the 3 applications (Cos, Ma, Ph). The total events are in the millions, and when I use join, the search job is getting auto-cancelled or terminated.

(index=a sourcetype=Cos) OR (index=a sourcetype=Ma) OR (index=a sourcetype=Ph)
stats count by clientid, sourcetype

Thanks,
Selvam.
The solution becomes more obvious if I restate the problem like this: In addition to colors, you must have another field with four distinct values.  Let's call the additional field group, and give th... See more...
The solution becomes more obvious if I restate the problem like this: In addition to colors, you must have another field with four distinct values. Let's call the additional field group, and give it the values "a", "b", "c", and "d".

colors  group
blue    a
blue    a
red     a
yellow  b
red     b
blue    c
red     c
blue    c
red     d
red     d
green   d
green   d

When the data structure is clear, what you are asking is to

1. Find values of colors that appear more than once within each group value.
2. Count how many distinct values of group there are for each duplicated value of colors.

Hence,

| stats count by colors group
| where count > 1
| stats dc(group) as duplicate_count by colors

Here is a data emulation you can play with and compare with real data:

| makeresults format=csv data="colors,group
blue,a
blue,a
red,a
yellow,b
red,b
blue,c
red,c
blue,c
red,d
red,d
green,d
green,d"
``` data emulation above ```

Stringing the two together, you get

colors  duplicate_count
blue    2
green   1
red     1