All Topics

Hi, I'm new to regex. Could someone please help me with a regex to extract the file name and file path as separate fields in the data model? The values of the fileName and filePath fields vary from event to event. Thank you. Below is the sample data:

"evidence": [{"entityType": "File", "evidenceCreationTime": "2022-12-19T10:43:56.51Z", "sha1": "336466254f9fe9b5a09f27848317525481dd5dd6", "sha256": "59de220b8d7961086e8d2d1fde61b71a810a32f78a9175f1f87ecacd692b85c9", "fileName": "Nero-8.1.1.0b_fra_trial.exe", "filePath": "F:\\Desktop new backup\\Musique \\Nero 8", "processId": null, "processCommandLine": null, "processCreationTime": null, "parentProcessId":
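A minimal sketch of one way to pull both fields out with rex, assuming the JSON above arrives as raw text (the index and sourcetype names are placeholders, not from the question):

```
index=your_index sourcetype=your_sourcetype
| rex "\"fileName\":\s*\"(?<file_name>[^\"]+)\""
| rex "\"filePath\":\s*\"(?<file_path>[^\"]+)\""
| table file_name file_path
```

The `[^"]+` character class stops at the closing quote, so the backslashes inside the Windows path are captured intact. For a data model, the same patterns could go into calculated/extracted fields on the sourcetype rather than inline rex.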
We have Splunk Cloud Victoria 9.0.2208.4 and have installed and configured:
- Cisco Cloud Security
- Cisco Cloud Security Umbrella Add-on

We followed the installation steps but we are getting no data. With this query we see the following log errors:

index=_internal log_level=ERROR event_message="*umbrella*"

12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: File "/opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/solnlib/credentials.py", line 133, in get_password
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: return func(*args, **kwargs)
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: File "/opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/solnlib/utils.py", line 128, in wrapper
12-21-2022 11:24:02.878 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last
12-21-2022 11:24:02.749 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: .
12-21-2022 11:24:02.749 +0000 ERROR PersistentScript [25593 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/TA_cisco_cloud_security_umbrella_addon_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-cisco-cloud-security-umbrella-addon#configs/conf-ta_cisco_cloud_security_umbrella_addon_settings, user=proxy.
I am new to Splunk and working on a complex query where I need to implement SQL's NOT IN functionality along with eval. I want to skip all the IN-PROGRESS events that later went into the COMPLETED state, and display only the events that are still in the IN-PROGRESS state. For example:

COMPLETED events: event1 event5 event4 event7
IN-PROGRESS events: event3 event1 event4
Expected result: event3

Given below are the queries to fetch COMPLETED and IN-PROGRESS events:

index=abc message="*COMPLETED*" | eval splitStr=split(message, ",") | eval eventName=mvindex(splitStr,1) | table eventName

index=abc message="*IN-PROGRESS*" | eval splitStr=split(message, ",") | eval eventName=mvindex(splitStr,1) | table eventName

Thank you in advance.
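One common way to express SQL's NOT IN in SPL is a subsearch combined with NOT: the inner search returns the COMPLETED eventName values, and the outer search excludes them. A sketch using the index and field names from the question:

```
index=abc message="*IN-PROGRESS*"
| eval eventName=mvindex(split(message, ","), 1)
| search NOT
    [ search index=abc message="*COMPLETED*"
      | eval eventName=mvindex(split(message, ","), 1)
      | dedup eventName
      | fields eventName ]
| table eventName
```

The subsearch expands to `NOT (eventName="event1" OR eventName="event4" OR ...)`. Note that subsearches have default result and runtime limits (around 10,000 rows), so for very large event sets a single-search approach with stats by eventName may scale better.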
Hello Splunk Community, I'm running a script using the Splunk CLI to retrieve the required information. The script has previously been run multiple times without issue. I am now receiving the following error, but only for specific dates:

FATAL: Invalid value "14/10/2022:2:0:00" for time term 'earliest'

I can reproduce the problem in the graphical interface, but if I change the date to '12/10/2022' the query is successful. Likewise, searching for all logs for the date through the GUI returns the logs for the day. The script has already turned over the first 12 days of the month without error, so the syntax is good and the logs are indexed. Does anyone have any ideas why I am receiving this error only for specific dates within the month? PS: I can also reproduce this in a different month with the same dates: 12 returns results, 13 returns an error. Kind regards,
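The symptom (days 1-12 work, day 13 and above fail) points at the date format: Splunk's default timeformat for earliest/latest is %m/%d/%Y:%H:%M:%S, i.e. month first. "14/10/2022" therefore parses as month 14 and is rejected, while "12/10/2022" silently parses as December 10th rather than October 12th. A sketch with the month and day swapped into Splunk's expected order (the index name is a placeholder):

```
index=your_index earliest="10/14/2022:2:0:00" latest="10/15/2022:2:0:00"
```

If the script must keep day-first dates, converting them to epoch time before passing them to the CLI avoids the ambiguity entirely.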
Hello Splunkers, I am currently having parsing problems with my Splunk Heavy Forwarder. I know I have heavy regexes that are causing typing queue problems, but I do not understand why Splunk is not taking more CPU on my machine (CPU usage is always around 10-15%). Thanks a lot, GaetanVP
Good evening, We are currently unable to connect to the following Splunk Cloud trial instance, which is due to expire next December 29th. Could you please investigate this issue?

15:51 $ curl -k -H "Authorization: Splunk a19b174b-9x9x-4e02-a83f-9999999999999" -v -d '{"index": "moacir-splunk-cloud-siem", "event": "blah blah blah","sourcetype": "_json" }' https://prd-p-ojiyn.splunkcloud.com:8088/services/collector/event
*   Trying 3.93.228.43:8088...
* TCP_NODELAY set
* connect to 3.93.228.43 port 8088 failed: Connection timed out
* Failed to connect to prd-p-ojiyn.splunkcloud.com port 8088: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to prd-p-ojiyn.splunkcloud.com port 8088: Connection timed out

Warm regards, Moacir
I have a table like this:

product_name | test_result | result_mv | calc_output
A            | 1           | 1 2 3     | 5
A            | 2           | 1 2 3     | 2
A            | 3           | 1 2 3     | 5
B            | 4           | 4 6 7     | 13
B            | 6           | 4 6 7     | 5
B            | 7           | 4 6 7     | 10

You can see the MV field "result_mv". It is the outcome of:

| eventstats list(test_result) by product_name

And I have a custom function, for example: Σ( (test_result - result_mv[index])^2 )

Example of the function output (calc_output):
(1-1)^2 + (1-2)^2 + (1-3)^2 = 0+1+4 = 5
(2-1)^2 + (2-2)^2 + (2-3)^2 = 1+0+1 = 2
(3-1)^2 + (3-2)^2 + (3-3)^2 = 4+1+0 = 5
(4-4)^2 + (4-6)^2 + (4-7)^2 = 0+4+9 = 13
(6-4)^2 + (6-6)^2 + (6-7)^2 = 4+0+1 = 5
(7-4)^2 + (7-6)^2 + (7-7)^2 = 9+1+0 = 10

Bottom line, I need to create the "calc_output" field from "result_mv" by "product_name".
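A sketch of one way to compute calc_output without a custom function: expand the multivalue field so each element becomes its own row, square the difference per row, then re-aggregate per original row (streamstats assigns a row id so rows can be stitched back together):

```
| eventstats list(test_result) as result_mv by product_name
| streamstats count as row_id
| mvexpand result_mv
| eval sq = pow(test_result - result_mv, 2)
| stats first(product_name) as product_name, first(test_result) as test_result,
        list(result_mv) as result_mv, sum(sq) as calc_output by row_id
| fields - row_id
```

For the first A row this yields sq values 0, 1, 4 and calc_output = 5, matching the worked example. Be aware mvexpand multiplies row counts, so for very large datasets the limits on mvexpand memory usage may need attention.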
Hi Everyone, I have a strange issue and am unable to find a fix. All the indexes have a long retention period, but the oldest data is limited to 270 days. I checked the index cluster but did not find anything that could be causing this issue. Here is the configuration for all indexes:

[example1]
coldPath = volume:primary/example1/colddb
homePath = volume:primary/example1/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/example1/thaweddb
frozenTimePeriodInSecs = 39420043

I checked the indexes and the indexers' disk space, and there is still space left for more data. Please let me know if anyone has had a similar experience or a suggestion to increase the retention period. Thanks,
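Worth noting: buckets roll to frozen when either frozenTimePeriodInSecs (~456 days here) or maxTotalDataSizeMB (512000 MB here) is exceeded, whichever is hit first, so an index that has reached its size cap will discard its oldest buckets early even though time-based retention has not expired. A sketch of a dbinspect check to see how old and how large the buckets actually are:

```
| dbinspect index=example1
| eval age_days = round((now() - startEpoch) / 86400)
| stats count as buckets, sum(sizeOnDiskMB) as size_mb, max(age_days) as oldest_age_days by state
```

If size_mb across hot/warm/cold is near 512000, raising maxTotalDataSizeMB (or moving to volume-based limits) would be the lever, not frozenTimePeriodInSecs.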
Dear splunkers, I would like to ask: is there any good source or website for Splunk administration material, apart from the Splunk documentation? I would appreciate your kind support.
Hi, I want to extract the values "login-first" and "delaccount" from the events in my results. The following are two sample fields from the result logs:

cf_app_name: AB123-login-first-pr
cf_app_name: CD123-delaccount-pr

Sample query used:

index=preprod source=logmon env="preprod"

Please help me extract the fields. Thanks in advance, SGL
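Assuming the pattern generalizes from the two samples (a prefix before the first hyphen and a "-pr"-style suffix after the last hyphen, which is an assumption), a rex sketch that captures everything in between:

```
index=preprod source=logmon env="preprod"
| rex field=cf_app_name "^[^-]+-(?<app_function>.+)-[^-]+$"
| table cf_app_name app_function
```

For "AB123-login-first-pr" this yields app_function="login-first", and "CD123-delaccount-pr" yields "delaccount", because the greedy `.+` keeps internal hyphens while the anchored `[^-]+` groups strip only the first and last segments.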
Hi, I need to draw multiple histogram charts together (lined up like a trellis) in one panel, so I can compare multiple distributions at the same time.

I have a search that gets multiple products' test results. Let's say I test 3 products: A, B, C. Product A has 500 results; the passed range is between -5 and 5, max(result) = 8, min(result) = -6. B has 350 results; the passed range is between -1 and 1, max(result) = 1.5, min(result) = -1.2. C has 420 results; the passed range is between 0 and 2, max(result) = 2.2, min(result) = 0.3.

I need to draw a histogram with result on the X axis and count on the Y axis, with a density curve on it. I also need to mark the upper and lower limit lines on every histogram, running from the bottom to the top of the Y axis. Basically I need a chart like the one linked below, with additional upper and lower limit lines, and multiple such histograms in one panel: https://stackoverflow.com/questions/64467644/add-density-curve-on-the-histogram

The number of bins is fixed, let's say 10. Under this condition the bin interval is dynamic: every product's histogram bins will have different intervals. In this example, A's interval will be around 1.4, B's around 0.27, and C's around 0.19 (but C's histogram's X-axis range will be around 0 to 2.2, because the lower limit line is at 0 on the X axis).
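A sketch of just the dynamic-binning part in SPL (field names `result` and `product` are assumptions): bin width is derived per product from its own min/max so every product gets exactly 10 bins, and the resulting table can be split per product with the trellis layout. The density curve and limit lines would have to be added as chart overlays on top of this.

```
index=your_index
| eventstats min(result) as minr, max(result) as maxr by product
| eval bin_width = (maxr - minr) / 10
| eval bin_start = minr + floor((result - minr) / bin_width) * bin_width
| eval bin_label = round(bin_start, 2) . " to " . round(bin_start + bin_width, 2)
| chart count over bin_label by product
```

The built-in `bin` command only accepts a fixed span, which is why the per-product width is computed with eval here.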
I have Prometheus running in my existing setup for infrastructure monitoring. I want to forward Prometheus logs/metrics to Splunk so that I can monitor them in the Splunk Cloud UI. How can we do that? I am not able to find proper documentation with steps for this. My current infrastructure is running on AWS EKS. Please share if anyone has documentation regarding this.
Hello community, Can anyone advise if it's possible to delete my search history? I'd like to delete old searches that serve no value, e.g., those that returned no results, failed (i.e., were test searches while learning), or are duplicates. I've searched the help docs and forums without luck. Thank you for your help in advance. Pietra
Hi, Are there any current instructions on how to disable this error message that I keep receiving? Where can I edit the conf file to disable it? I'm currently learning on a few virtual machines using VMware Workstation and do not need large disk-space limits in place just for training purposes.

"The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch."
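The 5000 MB threshold in that message comes from the minFreeSpace setting in server.conf. For a training VM it can be lowered (this is not recommended on production systems, where the headroom protects indexing and search dispatch); a sketch for $SPLUNK_HOME/etc/system/local/server.conf, followed by a restart:

```
[diskUsage]
minFreeSpace = 2000
```

Freeing disk space on the volume holding /opt/splunk/var is the alternative fix that keeps the default safety margin.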
I found that I am the only user in this situation. My role is admin. I thought it was a performance problem, but after solving the performance problem I still cannot run real-time searches, although scheduled searches run fine. How do I enable myself to run a real-time search?
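Real-time search is gated by the rtsearch capability on the user's role, so one avenue is to confirm the role actually grants it (the admin role normally does, but an edited or imported role may not). A sketch of the relevant authorize.conf stanza, assuming the role name is role_admin:

```
[role_admin]
rtsearch = enabled
```

The same check can be done in the UI under Settings > Roles by looking for rtsearch in the role's capability list; if the capability is present, the next place to look is whether real-time search has been disabled server-side in limits.conf.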
Hi All, Could you please help with extracting fields from this Java error log? I would like to see the result in a table format:

Code | Message
1234 | due to system error

The error log is as below:

message: Exception Occurred ::org.springframework.web.client.HttpClientErrorException$BadRequest: 400 Bad Request: [{"code":"1234","reason":"due to system error.","type":"ValidationException"}]
at org.springframework.web.client.HttpClientErrorException.create(HttpClientErrorException.java:303)
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:384)
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:325)
......

I have tried a few extractions with Splunk searches; however, none were fruitful.
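A rex sketch targeting the embedded JSON fragment in the message field (the index name is a placeholder, and `field=message` assumes the log line is already extracted into a field called message; drop that option to match against _raw instead):

```
index=your_index "Exception Occurred"
| rex field=message "\"code\":\"(?<Code>[^\"]+)\",\"reason\":\"(?<Message>[^\"]+)\""
| table Code Message
```

For the sample event this captures Code=1234 and Message="due to system error." (including the trailing period, which a `| eval Message=rtrim(Message, ".")` could strip if needed).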
On both the business transaction page and on dashboards I am having a problem with the HTML date control in the upper right of the screen. My issue is that the time will not stick in the control. If I use the drop-down to set it to, e.g., 10 am, it will jump to 7 pm. If I set it to 6 pm, it might jump to 4 am. I think if I let my session completely expire and then come back into the app, the control starts working again. But usually, at some point while I am working it gets into the messed-up state described above, and I am unable to fix it and get the dates I want. It seems like a JavaScript error, but I think my personal session also might be retaining old dates or something like that.

Developer tools is giving me a lot of these errors: an AngularJS 1.5.11 "[$rootScope:infdig] 10 $digest() iterations reached" error whose URL-encoded watcher payload shows the dateOutput value repeatedly advancing by one hour on each digest cycle (e.g. newVal "2022-07-01T21:00:00.000Z" against oldVal "2022-07-01T20:00:00.000Z", then 22:00 against 21:00, and so on):

js-lib-body-concat.js?7927b71ebc99ebb4583e4909474ca133:120 Error: [$rootScope:infdig] http://errors.angularjs.org/1.5.11/$rootScope/infdig?p0=10&p1=...
at js-lib-body-concat.js?7927b71ebc99ebb4583e4909474ca133:7:426
at m.$digest (js-lib-body-concat.js?7927b71ebc99ebb4583e4909474ca133:146:243)
at m.$apply (js-lib-body-concat.js?7927b71ebc99ebb4583e4909474ca133:148:363)
at ViewUtil.safeApply (shared.webpack.min.js?7927b71ebc99ebb4583e4909474ca133:1:520742)
at HTMLDocument.<anonymous> (MainAppModuleCode.webpack.min.js?7927b71ebc99ebb4583e4909474ca133:1:4740708)
at HTMLDocument.dispatch (js-lib-head-concat.js?7927b71ebc99ebb4583e4909474ca133:14:42571)
at v.handle (js-lib-head-concat.js?7927b71ebc99ebb4583e4909474ca133:14:40572)
at e.invokeTask (js-lib-head-concat.js?7927b71ebc99ebb4583e4909474ca133:11:24058)
at Object.onInvokeTask (vendor.js?7927b71ebc99ebb4583e4909474ca133:1:5774148)
at e.invokeTask (js-lib-head-concat.js?7927b71ebc99ebb4583e4909474ca133:11:23979)
Can anyone offer any guidance on what fields would be considered 'required' for inserting a record into the TrackMe 'trackme_host_monitoring' lookup, and if any other supporting lookups would require insert/updates as well? We have been tasked with host monitoring, and have implemented TrackMe for a few indexes so far. Our manager wants us to check the TrackMe host activity against a 'source of truth'. For example, our Azure team uses a script to generate a list of all Azure hosts every night at midnight. We're monitoring that list and ingesting it into an index, after which we update a lookup table with the values we need. We figure that we can run a report each day that compares a list of hosts (in this case, Azure VMs, but this could apply to firewalls, etc.) from our 'source of truth' against the hosts present in TrackMe's trackme_host_monitoring lookup. The devil is in the details, but at the end of the day we figure we could insert the host into the TrackMe lookup if it wasn't present there. Any advice appreciated.
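The lookup schema varies by TrackMe version, so the safest first step is to inspect the lookup itself (`| inputlookup trackme_host_monitoring`) and read off the actual column names. A comparison-report sketch under stated assumptions: the host column in the TrackMe lookup is called object, and the source-of-truth lookup is azure_hosts.csv with a host field (both names are hypothetical):

```
| inputlookup azure_hosts.csv
| rename host as object
| search NOT [ | inputlookup trackme_host_monitoring | fields object ]
| table object
```

This lists source-of-truth hosts missing from TrackMe. For the insert side, appending rows with only a subset of columns via `| outputlookup append=true` is possible, but since TrackMe maintains state fields on those rows, letting TrackMe's own tracker jobs create entries (or using its REST/API surface if the version provides one) is likely safer than raw lookup writes.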
I'm fairly new to Splunk, so forgive me if this is an easy question. I'm trying to sum a field, and then sum a subset (top 10) of the same field, so that I can get the percentage of web traffic generated by the top 10 users. I can get the individual searches to work no problem, but I can't get them to work together.

Search 1:
index=web category=website123 | stats sum(bytes) as total

Search 2:
index=web category=website123 | stats sum(bytes) as userTotal by userID | sort 10 -userTotal | stats sum(userTotal) as userTotal10

What I want to do is take those two results and do an eval percent=userTotal10/total*100 to give me a percentage. Essentially, I want to show the percentage of traffic generated by the top 10 users. So far, I have not been able to figure out how to do that. Any help would be greatly appreciated.
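One way to combine the two is a single search that captures the grand total with eventstats before trimming to the top 10, so both numbers survive to the final eval; a sketch built from the queries in the question:

```
index=web category=website123
| stats sum(bytes) as userTotal by userID
| eventstats sum(userTotal) as total
| sort 10 -userTotal
| stats sum(userTotal) as userTotal10, first(total) as total
| eval percent = round(userTotal10 / total * 100, 2)
```

eventstats annotates every row with the overall total without collapsing them, which is why the value is still available after `sort 10` discards all but the top 10 rows.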
Hi All, I have integrated Splunk HEC with Spring Boot. When I hit the application and check in Splunk, I am unable to see the logs in Splunk search under the given index. I am using sourcetype log4j2.

Below is my log4j2 XML file:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
    </Console>
    <SplunkHttp
      name="splunkhttp"
      url="https://localhost:8088"
      token="xxxx-xxxx-xxxx-xxxx"
      host="localhost"
      index="vehicle-api_dev"
      type="raw"
      source="http-event-logs"
      sourcetype="log4j"
      messageFormat="text"
      disableCertificateValidation="true">
      <PatternLayout pattern="%m" />
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <!-- LOG everything at INFO level -->
    <Root level="info">
      <AppenderRef ref="console" />
      <AppenderRef ref="splunkhttp" />
    </Root>
  </Loggers>
</Configuration>

My pom.xml configuration related to Splunk:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.8.0</version>
  <scope>runtime</scope>
</dependency>
<repositories>
  <repository>
    <id>splunk-artifactory</id>
    <name>Splunk Releases</name>
    <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
  </repository>
</repositories>

I am unable to see the logs. Can anyone help me? Thanks in advance.