All Posts

I am not seeing results for count on each of the fields for the two different searches below. The first one shows the (let's say 3) storefront names with no counts. If I just run | stats count by Storefront it returns the correct counts. The fields appear in Statistics with no counts or names for the NetScalers, site, or user. The second search does not return any statistical results. I am hoping to see the count of connections to the Storefront and its corresponding NetScaler in a Sankey diagram.

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, count as count_Netscaler ]
| appendpipe [ stats count by site | rename site as source, count as count_site ]
| appendpipe [ stats count by UserName | rename UserName as source, count as count_UserName ]
| fields source, count_Netscaler, count_site, count_UserName
| search source=*

| stats count by Storefront
| rename Storefront as source
| appendpipe [ stats count by Netscaler | rename Netscaler as source, Storefront as target ]
| appendpipe [ stats count by site | rename site as source, Netscaler as target ]
| appendpipe [ stats count by UserName | rename UserName as source, site as target ]
| search source=* AND target=*
| stats sum(count) as count by source, target
| fields source, target, count
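A hedged sketch of one way this is often approached: the appendpipe subsearches above operate on the already-aggregated rows, which no longer contain Netscaler, site, or UserName after the first stats, so they cannot produce counts. Counting each source/target pair straight from the raw events and appending the pairs might look like the following (field names are taken from the post; the index=your_citrix_index base search is a placeholder assumption):

index=your_citrix_index
| stats count by Netscaler Storefront
| rename Netscaler as source, Storefront as target
| append
    [ search index=your_citrix_index
    | stats count by site Netscaler
    | rename site as source, Netscaler as target ]
| append
    [ search index=your_citrix_index
    | stats count by UserName site
    | rename UserName as source, site as target ]
| table source target count

Each row then carries a source, a target, and a count, which is the shape the Sankey visualization expects.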
Long story short - it's not possible. Hot/warm cannot be time-limited. As simple as that. However many fancy calculations you do based on average bucket sizes and so on, do a few restarts across your clusters or get some bad-quality data and you end up with many small buckets rolling out of warm faster than you can say "bucket lifecycle". Anyway, it's relatively strange to see the same storage size allocated for hot/warm as for cold. Usually, since cold is slower and cheaper, there is far more cold space than hot/warm. Of course, keeping frozen data stored for an adequate period of time is up to you, so you can easily script it to wait X days before removing the exported buckets.
You're right. Come to think of it, my Dev licensed box also worked as a DS. That's why I said not to quote me on that. But seriously - the log suggests (you'd have to look in the code to verify) that the app is trying to list indexers. And this API endpoint might indeed not be available with a Dev license, since it's a single-instance-installation-only license.
For that, you need to dive very, very deep into the semantics of your logs.  Ask your developers how to reconstruct a complete transaction from log entries.  And yes, read about transaction and learn about its options.  And practice on mock data.  Semantic problems have no shortcuts. If you don't want to go semantic, there is delta, and possibly streamstats, which can give you the elapsed time since the second-to-last event (which you put up as the subject line for this question).  However, my reverse engineering based on the sample logs you gave gives me very low confidence that counting lines gives any meaningful measure.
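A minimal sketch of the streamstats approach mentioned above, assuming events are sorted by time and the only goal is the gap since the previous event (no real transaction semantics; index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| sort 0 _time
| streamstats current=f window=1 last(_time) as previous_time
| eval seconds_since_previous = _time - previous_time

The window could be widened to reach events further back, but as noted above this says nothing about whether those events belong to the same logical transaction.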
Splunk will not automatically give a count or a percentage after search.  You need to show the command you use to get those three values.  This is why @ITWhisperer says you cannot find an answer without context. This emulation shows what Splunk gets after raw search.   | makeresults | eval _raw = "{ \"@t\": \"2024-08-14T13:34:42.1718458Z\", \"@mt\": \"{className}{methodName}{transactionId}{logLevel}@{LogController_LogMetricsAsync_request}\", \"className\": \"D:\\\\CW\\\\uploader\\\\Service\\\\LogController.cs_152\", \"methodName\": \"LogMetricsAsync\", \"transactionId\": \"d8e8e141-e9fc749abb0f\", \"logLevel\": \"Information\", \"LogController_LogMetricsAsync_request\": { \"action\": \"Device\", \"event\": \"Info\", \"loggerData\": [ { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Midpoints\", \"content\": \"Episode\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Third Quartiles\", \"content\": \"Episode\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"PlaybackAd\", \"adType\": \"Completes\", \"adId\": \"676697\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Midpoints\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Third Quartiles\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Completes\", \"content\": \"Episode\", \"adId\": \"CODE791\" }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"act\": \"NetworkBalance\", \"data\": { \"connectionType\": \"Wi-Fi\", \"routerInfo\": \"ARRIS\" } }, { \"schema\": \"1.0\", \"bv\": \"1.3.41\", \"dt\": \"CS\", \"adType\": \"Start\", \"content\": \"Episode\", \"adId\": \"635897\" } ] } }" | spath ``` data emulation above ``` | table LogController_LogMetricsAsync_request.loggerData{}.adType   The table I get is LogController_LogMetricsAsync_request.loggerData{}.adType Midpoints Third Quartiles Completes Midpoints Third Quartiles Completes Start There are seven values in this array.  
Play with this emulation, plug in your subsequent search commands, and find out what's wrong in them.  Or post your search so volunteers can help. Side note: Python, for one, will not accept \\ in JSON.  Technically this is invalid in a JSON document, but somehow Splunk accepts it.
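As a hedged sketch of the kind of follow-on commands that could be plugged into the emulation above - expanding the loggerData array so each adType becomes its own row before counting (field paths are taken from the sample event; nothing else is assumed):

| spath path=LogController_LogMetricsAsync_request.loggerData{} output=loggerData
| mvexpand loggerData
| spath input=loggerData path=adType output=adType
| where isnotnull(adType)
| stats count by adType

This makes the duplicates visible (Midpoints, Third Quartiles and Completes each appear twice in the sample, plus one Start), which is harder to see when all values sit in one multivalued field.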
PickleRick, thank you for this information. I understand how buckets, indexes, and indexers work, and how the data retention process works in Splunk. This is a virtual Splunk cloud environment (Splunk is installed on cloud VMs), and we are NOT using SmartStore. I am just not sure how to configure the indexes.conf file / the individual index stanzas to reflect the data retention requirements of hot/warm for 30 months, cold for 30 months, frozen for 30 months.
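For illustration only, a sketch of what a per-index stanza in indexes.conf might look like under these requirements - noting, as the other replies say, that hot/warm and cold are governed by size and bucket count rather than by time, and only the overall roll-to-frozen retention is truly time-based. The index name, paths, and all numbers below are assumptions to adjust for your environment:

[your_index_name]
homePath   = $SPLUNK_DB/your_index_name/db
coldPath   = $SPLUNK_DB/your_index_name/colddb
thawedPath = $SPLUNK_DB/your_index_name/thaweddb
# ~60 months (30 hot/warm + 30 cold) of searchable data before buckets roll to frozen; this is the time-based control
frozenTimePeriodInSecs = 157680000
# keep frozen copies instead of deleting them (path is an assumption); age them out with an external script after another 30 months
coldToFrozenDir = /splunk_archive/your_index_name
# hot/warm and cold are bounded by size/bucket count, not time - these numbers are rough placeholders
maxWarmDBCount = 300
homePath.maxDataSizeMB = 500000
coldPath.maxDataSizeMB = 500000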
Here is the JSON event, when I hover over on the interested field "LogController_LogMerticsAsync_request.loggerData{}.adType", I am only getting 3 values, as shown in the table, but  I see 5 entries of "adTypes" in the raw event.    Values Count % Completes 1 100% Midpoints 1 100% Third Quartiles 1 100%   here is the sample json, It is a huge json event, but truncated some data.     { "@t": "2024-08-14T13:34:42.1718458Z", "@mt": "{className}{methodName}{transactionId}{logLevel}@{LogController_LogMetricsAsync_request}", "className": "D:\\CW\\uploader\\Service\\LogController.cs_152", "methodName": "LogMetricsAsync", "transactionId": "d8e8e141-e9fc749abb0f", "logLevel": "Information", "LogController_LogMetricsAsync_request": { "action": "Device", "event": "Info", "loggerData": [ { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Midpoints", "content": "Episode", "adId": "676697" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Third Quartiles", "content": "Episode", "adId": "676697" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "PlaybackAd", "adType": "Completes", "adId": "676697" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Midpoints", "content": "Episode", "adId": "CODE791" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Third Quartiles", "content": "Episode", "adId": "CODE791" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Completes", "content": "Episode", "adId": "CODE791" }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "act": "NetworkBalance", "data": { "connectionType": "Wi-Fi", "routerInfo": "ARRIS" } }, { "schema": "1.0", "bv": "1.3.41", "dt": "CS", "adType": "Start", "content": "Episode", "adId": "635897" } ] } }        
Hi Rick - thanks for the reply. I think forwarder management is supported, as I have a deployment server running on the same instance - I have created server classes and deployed apps via it, so that aspect appears to be working. My plan was to run Stream forwarder on the all-in-one instance and deploy the Splunk_TA_Stream app to my UFs. Should this be possible?
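Purely as a hedged illustration of the deployment-server side of that plan (the server class name and host pattern are made up, and the app stanza should use the exact app directory name as it sits in deployment-apps), a serverclass.conf entry pushing the Stream TA to the forwarders might look roughly like:

[serverClass:stream_forwarders]
whitelist.0 = uf-*.example.local

[serverClass:stream_forwarders:app:Splunk_TA_stream]
restartSplunkd = true
stateOnClient = enabled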
It's not about Stream as such. As far as I remember (but I haven't used the Dev license for some time, so don't quote me on that), the Dev license alleviates some limitations of the Free license (most importantly, it lets you have multiple users and schedule searches) but keeps some of them - single-instance installation only and, as far as I remember, no forwarder management.
Hi there, I have a small lab at home on which I am running Splunk Enterprise 9.0.0 build 6818ac46f2ec and a developer license. The Licensing » Installed licenses page shows 3 valid licenses with the following information:

Splunk Enterprise Term Non-Production License
creation_time: 2024-08-11 07:00:00+00:00
expiration_time: 2025-02-11 07:59:59+00:00
features: Acceleration AdvancedSearchCommands AdvancedXML Alerting ArchiveToHdfs Auth ConditionalLicensingEnforcement CustomRoles DeployClient DeployServer FwdData GuestPass KVStore LocalSearch MultifactorAuth NontableLookups RcvData RollingWindowAlerts SAMLAuth ScheduledAlerts ScheduledReports ScheduledSearch ScriptedAuth SigningProcessor SplunkWeb SubgroupId SyslogOutputProcessor
is_unlimited: False
label: Splunk Enterprise Term Non-Production License
max_violations: 5
notes: None
payload: None
quota_bytes: 53687091200.0
sourcetypes:
stack_name: enterprise
status: VALID
type: enterprise
window_period: 30

Splunk Forwarder
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: Auth DeployClient FwdData RcvData SigningProcessor SplunkWeb SyslogOutputProcessor
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD
is_unlimited: False
label: Splunk Forwarder
max_violations: 5
notes: None
payload: None
quota_bytes: 1048576.0
sourcetypes:
stack_name: forwarder
status: VALID
type: forwarder
window_period: 30

Splunk Free
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: FwdData KVStore LocalSearch RcvData ScheduledSearch SigningProcessor SplunkWeb
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
is_unlimited: False
label: Splunk Free
max_violations: 3
notes: None
payload: None
quota_bytes: 524288000.0
sourcetypes:
stack_name: free
status: VALID
type: free
window_period: 30

I would like to experiment with Splunk Stream for capturing DNS records before implementing it in our production environment. I have installed Splunk Stream 8.1.3 and most of the menus within the app work; however, when I go to Configuration > Distributed Forwarder Management it just displays a blank page. When I look at splunk_app_stream.log I can see the following error:

2024-08-15 14:51:58,543 ERROR rest_indexers:62 - failed to get indexers peer
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_stream/bin/rest_indexers.py", line 55, in handle_GET
    timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 612, in simpleRequest
    raise splunk.LicenseRestriction
splunk.LicenseRestriction: [HTTP 402] Current license does not allow the requested action
2024-08-15 14:51:58,580 ERROR indexer:52 - failed to list indexers
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_stream/bin/splunk_app_stream/models/indexer.py", line 43, in get_indexers
    timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.InternalServerError(None, serverResponse.messages)
splunk.InternalServerError: [HTTP 500] Splunkd internal error; []

Does this mean that the Splunk dev license does not support the Splunk Stream app?
"Every device on the network" doesn't have to necessarily be identically configured. That's from experience. Also - we don't know your data, we don't know how your data is onboarded. Check your eve... See more...
"Every device on the network" doesn't have to necessarily be identically configured. That's from experience. Also - we don't know your data, we don't know how your data is onboarded. Check your events as they come with something like index=whatever_index_you're_using host=your_router | head 10 And run this over "all time (real-time)" - that's practically the only use case I've ever seen where real-time search is actually useful. See the timestamp in the event itself, see the timestamp Splunk uses (either parsed out of the event or not recognized and assumed to be something). That's to check if your data is OK. BTW, if all your routers' logs are getting indexed in the same index there is no way (unless you have a very botched distributed indexing setup which I assume you haven't) that data from the same index for those hosts is rolled and for other hosts is retained.
You haven't asked a question. I assume your base tstats search does work (I don't have data to test it). Anyway, your base search (the tstats alone) will possibly give you multivalued fields with no relation between the atomic values (this might be what you want, but it often isn't). It will also give you data split by fields you're not including in your table command. And you will probably get a lot of data as a result - almost as if you were searching raw data. That's not how you use a base search - you'll probably get way too many base results to work with.
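A hedged illustration only (the data model and field names here are generic examples, not taken from the original search): a base search usually works best when it already aggregates down to exactly the fields the panels need, and the chained searches only filter or re-summarize that small table.

Base search:
| tstats count from datamodel=Network_Traffic where nodename=All_Traffic by All_Traffic.src All_Traffic.dest All_Traffic.action
| rename All_Traffic.* as *

Chained (post-process) search:
| search action=blocked
| stats sum(count) as blocked_connections by src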
Hello, I'm not getting the login page for the Splunk Cloud portal after signing up for the free trial.  Can anyone help with this issue please?  Thank you
I had to remove these 2 lines from the very top because they emptied the _time column:

| eval _time=strptime(date,"%m/%d/%Y")
| fields - date

But after that it works like a charm.  Thanks so much
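For what it's worth, a cautious guess at why those lines emptied _time: if the date field coming out of the real loadjob is in a different format (or absent because the events already carry _time), strptime() returns null and wipes _time. A defensive variant that only overwrites _time when the parse succeeds might look like:

| eval parsed_time = strptime(date, "%m/%d/%Y")
| eval _time = coalesce(parsed_time, _time)
| fields - date parsed_time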
Don't take it the wrong way, but please leave this to someone who has the skills and experience. It's not that it cannot be done, but you haven't even tried to test it in a lab environment, yet you're trying to change your production environment based on bits and pieces of advice you're getting on an internet forum. I suppose your already-indexed data and the work already done on your infrastructure are worth more than the money you'd spend on external help (there are friendly Splunk Partners in every region) or at least on investing in your own abilities by taking a course or two, digging through the docs, and building and breaking a few lab environments. While this is not something that's difficult for a seasoned Splunk admin, there are several things that can go wrong, and you wouldn't want to lose your data because of misconfiguring your indexes or something like that. And don't make multiple changes at the same time. Separating a search head from an existing AIO setup is one thing. Clustering indexers is another. Don't do too many things in one step.
I may be wrong (please contradict me if I am), but I think this may still be one of the (many?) deficiencies of Dashboard Studio when compared to Simple XML - let's hope this and the other deficiencies are resolved before Simple XML support is withdrawn!
See this presentation https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf It will tell you what you're dealing with.
Regardless of what M-21-31 is, there is a very important issue with your "config". The only time-based retention you can apply here is cold (assuming we're talking on-prem and we're not talking smartstore - that's territory I don't feel very comfortable with). Warm and hot are limited using different criteria. You can try to estimate their limits but that's only gonna be that - a rough estimate. Also - that will be the _limit_ and will tell you when data will _surely_ get rolled out to next tier and eventually to frozen whereas you most probably want it the other way - the limits under which the data will surely _not_ get rolled.
I have a dropdown where I select the event name and that event name value is passed as a token to the variable search. This variable search is a multiselect. One issue that I've noticed is that the multiselect values stay populated when a different event is selected. The search for variable will update the dropdown, though. Is there a way to reset the selected variables when a different event is selected? I have seen the simple xml versions for this but haven't seen any information on how to do this in dashboard stuido. Any help is greatly appreciated. { "visualizations": { "viz_Visualization": { "type": "splunk.line", "dataSources": { "primary": "ds_mainSearch" }, "options": { "overlayFields": [], "y": "> primary | frameBySeriesNames($dd2|s$)", "y2": "> primary | frameBySeriesNames('')", "lineWidth": 3, "showLineSmoothing": true, "xAxisMaxLabelParts": 2, "showRoundedY2AxisLabels": false, "x": "> primary | seriesByName('_time')" }, "title": "Visualization", "containerOptions": { "visibility": {} }, "eventHandlers": [ { "type": "drilldown.linkToSearch", "options": { "type": "auto", "newTab": false } } ] } }, "dataSources": { "ds_dd1": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype |dedup EventName \n| sort str(EventName)" }, "name": "dd1Search" }, "ds_mainSearch": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype EventName IN (\"$dd1$\") VariableName IN ($dd2|s$) \n| timechart span=5m max(Value) by VariableName", "enableSmartSources": true }, "name": "mainSearch" }, "ds_dd2": { "type": "ds.search", "options": { "enableSmartSources": true, "query": "index=index source=source sourcetype=sourcetype EventName = \"$dd1$\" |dedup VariableName \n| sort str(VariableName)" }, "name": "dd2Search" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" }, "input_dd1": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd1" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd1" }, "title": "Event Name", "type": "input.dropdown", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"EventName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"EventName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_dd2": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd2" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd2" }, "title": "Variable(s)", "type": "input.multiselect", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"VariableName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"VariableName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } } }, "layout": { "type": "grid", "options": { "width": 1440, "height": 960 }, "structure": [ { "item": "viz_Visualization", "type": "block", 
"position": { "x": 0, "y": 0, "w": 1440, "h": 653 } } ], "globalInputs": [ "input_global_trp", "input_dd1", "input_dd2" ] }, "description": "", "title": "Test" }  
| makeresults format=csv data="date,OTHER,arc,dev,test,prod
7/16/2024,5.76,0.017,2.333,2.235,19.114
7/17/2024,5.999,0.018,2.595,2.26,18.355
7/18/2024,6.019,0.018,2.559,1.962,16.879
7/19/2024,5.650,0.018,2.177,1.566,14.573
7/20/2024,4.849,0.013,2.389,1.609,12.348
7/21/2024,4.619,0.013,2.19,1.618,12.296
7/22/2024,5.716,0.019,2.425,1.626,14.286
7/23/2024,5.716,0.019,2.425,1.626,14.286"
| eval _time=strptime(date,"%m/%d/%Y")
| fields - date
``` the lines above simulate the data from your loadjob (with the 22nd duplicated to the 23rd to give 2 Tuesdays) ```
``` there is no need for the transpose as the untable will work with the _time and index fields in the other order ```
| untable _time index size
| eval date=strftime(_time,"%F")
| eval day=strftime(_time, "%a")
| where day="Tue"
| fields - day _time
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe
    [| eval date=date." change"
    | xyseries index date relative_size]
| appendpipe
    [| xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index